# Basic proof that $GL(2, \mathbb{Z})$ is not nilpotent

Question ID: 2089876 · Score: 7
Link: https://math.stackexchange.com/questions/2089876/basic-proof-that-gl2-mathbbz-is-not-nilpotent
I need to show that $GL(2,\mathbb{Z})=\left\{ \begin{pmatrix} a&b \\ c&d \end{pmatrix} \mid a,b,c,d \in \mathbb{Z},\ ad-bc = \pm 1\right\}$ is not nilpotent. I have seen [this question](https://math.stackexchange.com/questions/1563522/gl2-mathbbz-and-nilpotency), but the answer given there is too advanced for where I am currently in my studies.

In order to show something is not nilpotent, all I have at my disposal is to show that the upper central series does not terminate, or that the group is not solvable (using a derived series that does not terminate, or a subnormal series that either does not terminate or whose factors are not all abelian).

This seems a little messy, and I've never used a direct approach to show that something was not nilpotent.

What is the best approach to this proof without appealing to $p$-groups, Sylow subgroups, Galois theory, or free abelian groups?

Thank you.
[ { "answer_id": 2089961, "score": 12, "text": "<p>One possibility is to show that there is a proper, self-normalizing subgroup,\n$$\n\\left\\{ \\begin{pmatrix} a&amp;b \\\\ 0&amp;d \\end{pmatrix} : a, d = \\pm 1, b \\in \\mathbb{Z}\\right\\}\n$$\nbeing perhaps the <a href=\"https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257\">easiest candidate</a>.</p>\n" }, { "answer_id": 2089998, "score": 12, "text": "<p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\\Bbb{F}_2)$ as the smallest non-nilpotent group.</p>\n" }, { "answer_id": 2091141, "score": 5, "text": "<p>Let's find the upper central series. For ease of notation, let \n$$ T_k = \\begin{pmatrix} 1 &amp; k\\\\0 &amp; 1\\end{pmatrix}$$\nand\n$$ R = \\begin{pmatrix} 0 &amp; 1 \\\\ 1 &amp; 0\\end{pmatrix}$$\nbe translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let\n$$ X = \\begin{pmatrix} a &amp; b \\\\ c &amp; d\\end{pmatrix}$$\nthen $X$ being in the center implies\n\\begin{align}\nT_kXT_{-k} &amp;= X\\\\\nRXR &amp;= X\n\\end{align}\nThis is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\\{I, -I\\}$, with $I$ the $2\\times2$ identity matrix.</p>\n\n<p>What about the second center, $Z_2$? These are elements that are central \"mod $Z_1$\", or elements $X$ such that <strong>for all</strong> $Y\\in GL(2,\\mathbb{Z})$, we have\n$$ YXY^{-1} = \\pm X$$\n[Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p>\n\n<p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\\mathbb{Z})$ thus is not nilpotent.</p>\n\n<p><strong>EDIT</strong>: Adding in more details.</p>\n\n<h2>First center</h2>\n\n<p>If $X$ is in the center of $GL(2,\\mathbb{Z})$, then it commutes with all matrices. In particular, we have\n$$ T_kXT_{-k}=X$$\nfor all $T_k$. Expanding that out, we get four equations:\n\\begin{align}\na+ck &amp;= a\\\\\nb+k(d-a-ck) &amp;= b\\\\\nc &amp;= c\\\\\nd-ck &amp;= d\n\\end{align}\nSince this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$:\n\\begin{equation}\n\\begin{pmatrix}\nd &amp; c\\\\\nb &amp; a\n\\end{pmatrix} = \\begin{pmatrix} a &amp; b\\\\c &amp; d\\end{pmatrix}\n\\end{equation}</p>\n\n<p>Thus, our center is $\\{I, -I\\}$ (we actually need to check these commute with all matrices, but that's easy).</p>\n\n<h2>Second center</h2>\n\n<p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, \"mod $Z_1$\". This means that if $X\\in Z_2$, and $Y\\in GL(2,\\mathbb{Z})$, then there exists $Z_Y\\in Z_1$ such that\n$$ YXY^{-1} = Z_YX$$\nThe element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\\subset Z_1$).</p>\n\n<p>Let's find the second center. 
---

**Answer** (answer_id 2089998, score 12):

Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction-modulo-two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.
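*A quick sanity check (an illustrative sketch, not part of the original answer): homomorphic images of nilpotent groups are nilpotent, and a nontrivial group with trivial center cannot be nilpotent, since its upper central series never leaves the identity. The snippet below enumerates $GL_2(\Bbb{F}_2)$ and confirms it has order $6$ and trivial center.*

```python
from itertools import product

def mul(A, B):
    """2x2 matrix product over F_2."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

# All invertible 2x2 matrices over F_2 (determinant 1 mod 2).
G = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)
     if (a * d - b * c) % 2 == 1]
assert len(G) == 6

I = ((1, 0), (0, 1))
center = [x for x in G if all(mul(x, y) == mul(y, x) for y in G)]
assert center == [I]   # trivial center, so GL_2(F_2) is not nilpotent
print("order:", len(G), "center:", center)
```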
---

**Answer** (answer_id 2091141, score 5):

Let's find the upper central series. For ease of notation, let
$$ T_k = \begin{pmatrix} 1 & k\\0 & 1\end{pmatrix}$$
and
$$ R = \begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix}$$
be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z})$ must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let
$$ X = \begin{pmatrix} a & b \\ c & d\end{pmatrix}$$
then $X$ being in the center implies
\begin{align}
T_kXT_{-k} &= X\\
RXR &= X
\end{align}
This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplications and equate matrix coefficients]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.

What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that **for all** $Y\in GL(2,\mathbb{Z})$, we have
$$ YXY^{-1} = \pm X$$
[Note: the choice of $+$ or $-$ on the right-hand side depends on $Y$.]

Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.

**EDIT**: Adding in more details.

## First center

If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have
$$ T_kXT_{-k}=X$$
for all $T_k$. Expanding that out, we get four equations:
\begin{align}
a+ck &= a\\
b+k(d-a-ck) &= b\\
c &= c\\
d-ck &= d
\end{align}
Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$:
\begin{equation}
\begin{pmatrix}
d & c\\
b & a
\end{pmatrix} = \begin{pmatrix} a & b\\c & d\end{pmatrix}
\end{equation}
which gives $b=c$, and hence $b=c=0$.

Thus, our center is $\{I, -I\}$ (we actually need to check that these commute with all matrices, but that's easy).

## Second center

The second center is defined by $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$ and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that
$$ YXY^{-1} = Z_YX$$
The element from $Z_1$ will generally depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$.)

Let's find the second center. If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that
$$ T_kXT_{-k} = Z_kX$$

Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ with $k_1\neq k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$)
$$ a + ck_1 = a + ck_2$$
which again implies $c=0$. Using $R$ we have
\begin{equation}
\begin{pmatrix}
d & c\\
b & a
\end{pmatrix} = Z_R\begin{pmatrix} a & b\\c & d\end{pmatrix}
\end{equation}
where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$ and $a=\pm d$.

That means the only thing left to show, in order to conclude $Z_2=Z_1=\{I, -I\}$, is that $a=-d$ cannot happen. That is, we need to show that the matrix
$$ \begin{pmatrix}
-1 & 0\\
0 & 1
\end{pmatrix}
$$
is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).
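*A quick check of the closing exercise (an illustrative sketch, not part of the original answer): conjugating $X = \begin{pmatrix}-1&0\\0&1\end{pmatrix}$ by $T_1$ produces a matrix that is neither $X$ nor $-X$, so $X \notin Z_2$.*

```python
def mul(A, B):
    """2x2 integer matrix product."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

T1, T1_inv = ((1, 1), (0, 1)), ((1, -1), (0, 1))   # T_1 and T_{-1}
X     = ((-1, 0), (0, 1))
neg_X = ((1, 0), (0, -1))

conj = mul(mul(T1, X), T1_inv)
print(conj)                          # ((-1, 2), (0, 1))
assert conj != X and conj != neg_X   # neither X nor -X: X is not in Z_2
```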
---

**Answer** (answer_id 2091072, score 1):

Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.

From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$, where $I$ is the identity matrix.

Now, consider the upper central series:
$$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$
where
$$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\} = \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$
In particular, observe that
$$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$
Now, onwards to $Z_2$:
$$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$
Let us examine the second condition, $xy = -yx$. This means that $y = -x^{-1}yx$, which gives
$$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y),$$
hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds are $I$ and $-I$ (see below for a proof of this claim).

Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc.: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$.

But then the upper central series never reaches $G$; hence $G$ is not nilpotent!

**Proof of claim:** Clearly the claim holds for $x = I$ and $x = -I$, since these matrices are in the center of $G$.

Now, let $x = \begin{pmatrix}x_1 & x_2\\x_3 & x_4\end{pmatrix}$. Then we must have
$$\begin{pmatrix}x_1 & x_1+x_2\\x_3 & x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 & x_2\\x_3 & x_4\end{pmatrix}\begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix} = \begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}\begin{pmatrix}x_1 & x_2\\x_3 & x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3 & x_2+x_4\\x_3 & x_4\end{pmatrix},$$
hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 & 0\\ 1 & 1\end{pmatrix}$, you find that $x_2 = 0$ and $x_1 = x_4$. But since $x \in G$, we also need $\det(x) = x_1^2 = \pm 1$, so $x_1 = \pm 1$; the only matrices satisfying all of this are $I$ and $-I$.
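*A brute-force corroboration of the claim (an illustrative sketch over a small search window, not a proof): among matrices in $GL(2,\mathbb{Z})$ with entries bounded by $3$ in absolute value, only $\pm I$ commute with both unit triangular matrices used above. Note that both test matrices have trace $2 \neq 0$, so the commuting requirement applies to them.*

```python
from itertools import product

def mul(A, B):
    """2x2 integer matrix product."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

U = ((1, 1), (0, 1))   # upper unit triangular, trace 2 != 0
L = ((1, 0), (1, 1))   # lower unit triangular, trace 2 != 0

hits = []
for x1, x2, x3, x4 in product(range(-3, 4), repeat=4):
    x = ((x1, x2), (x3, x4))
    if x1 * x4 - x2 * x3 in (1, -1):   # x lies in GL(2, Z)
        if mul(x, U) == mul(U, x) and mul(x, L) == mul(L, x):
            hits.append(x)

assert hits == [((-1, 0), (0, -1)), ((1, 0), (0, 1))]   # exactly -I and I
print(hits)
```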
If $X\\in Z_2$, then for every $k$, there exists $Z_k\\in Z_1=\\{I, -I\\}$ such that\n$$ T_kXT_{-k} = Z_kX$$</p>\n\n<p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$)\n$$ a + ck_1 = a + ck_2$$\nwhich again implies $c=0$. Using $R$ we have\n\\begin{equation}\n\\begin{pmatrix}\nd &amp; c\\\\\nb &amp; a\n\\end{pmatrix} = Z_R\\begin{pmatrix} a &amp; b\\\\c &amp; d\\end{pmatrix}\n\\end{equation}\nwhere $Z_R$ is $I$ or $-I$. Thus we get $b=\\pm c=0$, and $a=\\pm d$.</p>\n\n<p>That means the only thing left to show that $Z_2=Z_1=\\{I, -I\\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix\n$$ \\begin{pmatrix}\n-1 &amp; 0\\\\\n0 &amp; 1\n\\end{pmatrix}\n$$\nis not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p>\n" }, { "answer_id": 2091072, "score": 1, "text": "<p>Because I'm lazy, I'll write $G = \\operatorname{GL}(2,\\mathbb{Z})$.</p>\n\n<p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \\{I,-I\\}$ where $I$ is the identity matrix.</p>\n\n<p>Now, consider the upper central series:\n$$1 = Z_0 \\trianglelefteq Z_1 \\trianglelefteq Z_2 \\trianglelefteq \\cdots \\trianglelefteq Z_i \\trianglelefteq \\cdots$$\nwhere $$Z_{i+1} = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_i\\}= \\{ x \\in G \\mid \\forall y \\in G: x^{-1}y^{-1}xy \\in Z_i\\}.$$ In particular, observe that\n$$Z_1 = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_0\\} = \\{ x \\in G \\mid \\forall y \\in G: xy = yx \\} = Z(G) = \\{I,-I\\}.$$\nNow, onwards to $Z_2$. \n$$Z_2 = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_1\\} = \\{ x \\in G \\mid \\forall y \\in G: (xy = yx \\text{ or } xy = -yx) \\}.$$\nLet us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives \n$$\\operatorname{trace}(y) = \\operatorname{trace}(-x^{-1}yx) = -\\operatorname{trace}(x^{-1}yx) = -\\operatorname{trace}(xx^{-1}y) = -\\operatorname{trace}(y)$$ hence $\\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\\operatorname{trace}(y) \\neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p>\n\n<p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \\{I,-I\\}$ for all $i \\geq 1$. </p>\n\n<p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p>\n\n<p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p>\n\n<p>Now, let $x = \\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix}$. Then we must have that\n$$\\begin{pmatrix}x_1 &amp; x_1+x_2\\\\x_3 &amp; x_3 + x_4\\end{pmatrix} = \\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix}\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix} = \\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}\\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix} = \\begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\\\x_3 &amp; x_4\\end{pmatrix},$$\nhence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\\begin{pmatrix}1 &amp; 0\\\\\n1 &amp; 1\\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \\in G$, then the only matrices satisfying this are $I$ and $-I$.</p>\n" }, { "answer_id": 2089961, "score": 12, "text": "<p>One possibility is to show that there is a proper, self-normalizing subgroup,\n$$\n\\left\\{ \\begin{pmatrix} a&amp;b \\\\ 0&amp;d \\end{pmatrix} : a, d = \\pm 1, b \\in \\mathbb{Z}\\right\\}\n$$\nbeing perhaps the <a href=\"https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257\">easiest candidate</a>.</p>\n" }, { "answer_id": 2089998, "score": 12, "text": "<p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\\Bbb{F}_2)$ as the smallest non-nilpotent group.</p>\n" }, { "answer_id": 2091141, "score": 5, "text": "<p>Let's find the upper central series. For ease of notation, let \n$$ T_k = \\begin{pmatrix} 1 &amp; k\\\\0 &amp; 1\\end{pmatrix}$$\nand\n$$ R = \\begin{pmatrix} 0 &amp; 1 \\\\ 1 &amp; 0\\end{pmatrix}$$\nbe translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let\n$$ X = \\begin{pmatrix} a &amp; b \\\\ c &amp; d\\end{pmatrix}$$\nthen $X$ being in the center implies\n\\begin{align}\nT_kXT_{-k} &amp;= X\\\\\nRXR &amp;= X\n\\end{align}\nThis is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\\{I, -I\\}$, with $I$ the $2\\times2$ identity matrix.</p>\n\n<p>What about the second center, $Z_2$? These are elements that are central \"mod $Z_1$\", or elements $X$ such that <strong>for all</strong> $Y\\in GL(2,\\mathbb{Z})$, we have\n$$ YXY^{-1} = \\pm X$$\n[Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p>\n\n<p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\\mathbb{Z})$ thus is not nilpotent.</p>\n\n<p><strong>EDIT</strong>: Adding in more details.</p>\n\n<h2>First center</h2>\n\n<p>If $X$ is in the center of $GL(2,\\mathbb{Z})$, then it commutes with all matrices. In particular, we have\n$$ T_kXT_{-k}=X$$\nfor all $T_k$. Expanding that out, we get four equations:\n\\begin{align}\na+ck &amp;= a\\\\\nb+k(d-a-ck) &amp;= b\\\\\nc &amp;= c\\\\\nd-ck &amp;= d\n\\end{align}\nSince this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$:\n\\begin{equation}\n\\begin{pmatrix}\nd &amp; c\\\\\nb &amp; a\n\\end{pmatrix} = \\begin{pmatrix} a &amp; b\\\\c &amp; d\\end{pmatrix}\n\\end{equation}</p>\n\n<p>Thus, our center is $\\{I, -I\\}$ (we actually need to check these commute with all matrices, but that's easy).</p>\n\n<h2>Second center</h2>\n\n<p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, \"mod $Z_1$\". This means that if $X\\in Z_2$, and $Y\\in GL(2,\\mathbb{Z})$, then there exists $Z_Y\\in Z_1$ such that\n$$ YXY^{-1} = Z_YX$$\nThe element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\\subset Z_1$).</p>\n\n<p>Let's find the second center. 
If $X\\in Z_2$, then for every $k$, there exists $Z_k\\in Z_1=\\{I, -I\\}$ such that\n$$ T_kXT_{-k} = Z_kX$$</p>\n\n<p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$)\n$$ a + ck_1 = a + ck_2$$\nwhich again implies $c=0$. Using $R$ we have\n\\begin{equation}\n\\begin{pmatrix}\nd &amp; c\\\\\nb &amp; a\n\\end{pmatrix} = Z_R\\begin{pmatrix} a &amp; b\\\\c &amp; d\\end{pmatrix}\n\\end{equation}\nwhere $Z_R$ is $I$ or $-I$. Thus we get $b=\\pm c=0$, and $a=\\pm d$.</p>\n\n<p>That means the only thing left to show that $Z_2=Z_1=\\{I, -I\\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix\n$$ \\begin{pmatrix}\n-1 &amp; 0\\\\\n0 &amp; 1\n\\end{pmatrix}\n$$\nis not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p>\n" }, { "answer_id": 2091072, "score": 1, "text": "<p>Because I'm lazy, I'll write $G = \\operatorname{GL}(2,\\mathbb{Z})$.</p>\n\n<p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \\{I,-I\\}$ where $I$ is the identity matrix.</p>\n\n<p>Now, consider the upper central series:\n$$1 = Z_0 \\trianglelefteq Z_1 \\trianglelefteq Z_2 \\trianglelefteq \\cdots \\trianglelefteq Z_i \\trianglelefteq \\cdots$$\nwhere $$Z_{i+1} = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_i\\}= \\{ x \\in G \\mid \\forall y \\in G: x^{-1}y^{-1}xy \\in Z_i\\}.$$ In particular, observe that\n$$Z_1 = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_0\\} = \\{ x \\in G \\mid \\forall y \\in G: xy = yx \\} = Z(G) = \\{I,-I\\}.$$\nNow, onwards to $Z_2$. \n$$Z_2 = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_1\\} = \\{ x \\in G \\mid \\forall y \\in G: (xy = yx \\text{ or } xy = -yx) \\}.$$\nLet us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives \n$$\\operatorname{trace}(y) = \\operatorname{trace}(-x^{-1}yx) = -\\operatorname{trace}(x^{-1}yx) = -\\operatorname{trace}(xx^{-1}y) = -\\operatorname{trace}(y)$$ hence $\\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\\operatorname{trace}(y) \\neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p>\n\n<p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \\{I,-I\\}$ for all $i \\geq 1$. </p>\n\n<p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p>\n\n<p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p>\n\n<p>Now, let $x = \\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix}$. Then we must have that\n$$\\begin{pmatrix}x_1 &amp; x_1+x_2\\\\x_3 &amp; x_3 + x_4\\end{pmatrix} = \\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix}\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix} = \\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}\\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix} = \\begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\\\x_3 &amp; x_4\\end{pmatrix},$$\nhence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\\begin{pmatrix}1 &amp; 0\\\\\n1 &amp; 1\\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \\in G$, then the only matrices satisfying this are $I$ and $-I$.</p>\n" }, { "answer_id": 2089961, "score": 12, "text": "<p>One possibility is to show that there is a proper, self-normalizing subgroup,\n$$\n\\left\\{ \\begin{pmatrix} a&amp;b \\\\ 0&amp;d \\end{pmatrix} : a, d = \\pm 1, b \\in \\mathbb{Z}\\right\\}\n$$\nbeing perhaps the <a href=\"https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257\">easiest candidate</a>.</p>\n" }, { "answer_id": 2089998, "score": 12, "text": "<p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\\Bbb{F}_2)$ as the smallest non-nilpotent group.</p>\n" }, { "answer_id": 2091141, "score": 5, "text": "<p>Let's find the upper central series. For ease of notation, let \n$$ T_k = \\begin{pmatrix} 1 &amp; k\\\\0 &amp; 1\\end{pmatrix}$$\nand\n$$ R = \\begin{pmatrix} 0 &amp; 1 \\\\ 1 &amp; 0\\end{pmatrix}$$\nbe translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let\n$$ X = \\begin{pmatrix} a &amp; b \\\\ c &amp; d\\end{pmatrix}$$\nthen $X$ being in the center implies\n\\begin{align}\nT_kXT_{-k} &amp;= X\\\\\nRXR &amp;= X\n\\end{align}\nThis is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\\{I, -I\\}$, with $I$ the $2\\times2$ identity matrix.</p>\n\n<p>What about the second center, $Z_2$? These are elements that are central \"mod $Z_1$\", or elements $X$ such that <strong>for all</strong> $Y\\in GL(2,\\mathbb{Z})$, we have\n$$ YXY^{-1} = \\pm X$$\n[Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p>\n\n<p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\\mathbb{Z})$ thus is not nilpotent.</p>\n\n<p><strong>EDIT</strong>: Adding in more details.</p>\n\n<h2>First center</h2>\n\n<p>If $X$ is in the center of $GL(2,\\mathbb{Z})$, then it commutes with all matrices. In particular, we have\n$$ T_kXT_{-k}=X$$\nfor all $T_k$. Expanding that out, we get four equations:\n\\begin{align}\na+ck &amp;= a\\\\\nb+k(d-a-ck) &amp;= b\\\\\nc &amp;= c\\\\\nd-ck &amp;= d\n\\end{align}\nSince this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$:\n\\begin{equation}\n\\begin{pmatrix}\nd &amp; c\\\\\nb &amp; a\n\\end{pmatrix} = \\begin{pmatrix} a &amp; b\\\\c &amp; d\\end{pmatrix}\n\\end{equation}</p>\n\n<p>Thus, our center is $\\{I, -I\\}$ (we actually need to check these commute with all matrices, but that's easy).</p>\n\n<h2>Second center</h2>\n\n<p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, \"mod $Z_1$\". This means that if $X\\in Z_2$, and $Y\\in GL(2,\\mathbb{Z})$, then there exists $Z_Y\\in Z_1$ such that\n$$ YXY^{-1} = Z_YX$$\nThe element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\\subset Z_1$).</p>\n\n<p>Let's find the second center. 
If $X\\in Z_2$, then for every $k$, there exists $Z_k\\in Z_1=\\{I, -I\\}$ such that\n$$ T_kXT_{-k} = Z_kX$$</p>\n\n<p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$)\n$$ a + ck_1 = a + ck_2$$\nwhich again implies $c=0$. Using $R$ we have\n\\begin{equation}\n\\begin{pmatrix}\nd &amp; c\\\\\nb &amp; a\n\\end{pmatrix} = Z_R\\begin{pmatrix} a &amp; b\\\\c &amp; d\\end{pmatrix}\n\\end{equation}\nwhere $Z_R$ is $I$ or $-I$. Thus we get $b=\\pm c=0$, and $a=\\pm d$.</p>\n\n<p>That means the only thing left to show that $Z_2=Z_1=\\{I, -I\\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix\n$$ \\begin{pmatrix}\n-1 &amp; 0\\\\\n0 &amp; 1\n\\end{pmatrix}\n$$\nis not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p>\n" }, { "answer_id": 2091072, "score": 1, "text": "<p>Because I'm lazy, I'll write $G = \\operatorname{GL}(2,\\mathbb{Z})$.</p>\n\n<p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \\{I,-I\\}$ where $I$ is the identity matrix.</p>\n\n<p>Now, consider the upper central series:\n$$1 = Z_0 \\trianglelefteq Z_1 \\trianglelefteq Z_2 \\trianglelefteq \\cdots \\trianglelefteq Z_i \\trianglelefteq \\cdots$$\nwhere $$Z_{i+1} = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_i\\}= \\{ x \\in G \\mid \\forall y \\in G: x^{-1}y^{-1}xy \\in Z_i\\}.$$ In particular, observe that\n$$Z_1 = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_0\\} = \\{ x \\in G \\mid \\forall y \\in G: xy = yx \\} = Z(G) = \\{I,-I\\}.$$\nNow, onwards to $Z_2$. \n$$Z_2 = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_1\\} = \\{ x \\in G \\mid \\forall y \\in G: (xy = yx \\text{ or } xy = -yx) \\}.$$\nLet us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives \n$$\\operatorname{trace}(y) = \\operatorname{trace}(-x^{-1}yx) = -\\operatorname{trace}(x^{-1}yx) = -\\operatorname{trace}(xx^{-1}y) = -\\operatorname{trace}(y)$$ hence $\\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\\operatorname{trace}(y) \\neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p>\n\n<p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \\{I,-I\\}$ for all $i \\geq 1$. </p>\n\n<p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p>\n\n<p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p>\n\n<p>Now, let $x = \\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix}$. Then we must have that\n$$\\begin{pmatrix}x_1 &amp; x_1+x_2\\\\x_3 &amp; x_3 + x_4\\end{pmatrix} = \\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix}\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix} = \\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}\\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix} = \\begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\\\x_3 &amp; x_4\\end{pmatrix},$$\nhence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\\begin{pmatrix}1 &amp; 0\\\\\n1 &amp; 1\\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \\in G$, then the only matrices satisfying this are $I$ and $-I$.</p>\n" }, { "answer_id": 2089961, "score": 12, "text": "<p>One possibility is to show that there is a proper, self-normalizing subgroup,\n$$\n\\left\\{ \\begin{pmatrix} a&amp;b \\\\ 0&amp;d \\end{pmatrix} : a, d = \\pm 1, b \\in \\mathbb{Z}\\right\\}\n$$\nbeing perhaps the <a href=\"https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257\">easiest candidate</a>.</p>\n" }, { "answer_id": 2089998, "score": 12, "text": "<p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\\Bbb{F}_2)$ as the smallest non-nilpotent group.</p>\n" }, { "answer_id": 2091141, "score": 5, "text": "<p>Let's find the upper central series. For ease of notation, let \n$$ T_k = \\begin{pmatrix} 1 &amp; k\\\\0 &amp; 1\\end{pmatrix}$$\nand\n$$ R = \\begin{pmatrix} 0 &amp; 1 \\\\ 1 &amp; 0\\end{pmatrix}$$\nbe translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let\n$$ X = \\begin{pmatrix} a &amp; b \\\\ c &amp; d\\end{pmatrix}$$\nthen $X$ being in the center implies\n\\begin{align}\nT_kXT_{-k} &amp;= X\\\\\nRXR &amp;= X\n\\end{align}\nThis is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\\{I, -I\\}$, with $I$ the $2\\times2$ identity matrix.</p>\n\n<p>What about the second center, $Z_2$? These are elements that are central \"mod $Z_1$\", or elements $X$ such that <strong>for all</strong> $Y\\in GL(2,\\mathbb{Z})$, we have\n$$ YXY^{-1} = \\pm X$$\n[Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p>\n\n<p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\\mathbb{Z})$ thus is not nilpotent.</p>\n\n<p><strong>EDIT</strong>: Adding in more details.</p>\n\n<h2>First center</h2>\n\n<p>If $X$ is in the center of $GL(2,\\mathbb{Z})$, then it commutes with all matrices. In particular, we have\n$$ T_kXT_{-k}=X$$\nfor all $T_k$. Expanding that out, we get four equations:\n\\begin{align}\na+ck &amp;= a\\\\\nb+k(d-a-ck) &amp;= b\\\\\nc &amp;= c\\\\\nd-ck &amp;= d\n\\end{align}\nSince this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$:\n\\begin{equation}\n\\begin{pmatrix}\nd &amp; c\\\\\nb &amp; a\n\\end{pmatrix} = \\begin{pmatrix} a &amp; b\\\\c &amp; d\\end{pmatrix}\n\\end{equation}</p>\n\n<p>Thus, our center is $\\{I, -I\\}$ (we actually need to check these commute with all matrices, but that's easy).</p>\n\n<h2>Second center</h2>\n\n<p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, \"mod $Z_1$\". This means that if $X\\in Z_2$, and $Y\\in GL(2,\\mathbb{Z})$, then there exists $Z_Y\\in Z_1$ such that\n$$ YXY^{-1} = Z_YX$$\nThe element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\\subset Z_1$).</p>\n\n<p>Let's find the second center. 
If $X\\in Z_2$, then for every $k$, there exists $Z_k\\in Z_1=\\{I, -I\\}$ such that\n$$ T_kXT_{-k} = Z_kX$$</p>\n\n<p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$)\n$$ a + ck_1 = a + ck_2$$\nwhich again implies $c=0$. Using $R$ we have\n\\begin{equation}\n\\begin{pmatrix}\nd &amp; c\\\\\nb &amp; a\n\\end{pmatrix} = Z_R\\begin{pmatrix} a &amp; b\\\\c &amp; d\\end{pmatrix}\n\\end{equation}\nwhere $Z_R$ is $I$ or $-I$. Thus we get $b=\\pm c=0$, and $a=\\pm d$.</p>\n\n<p>That means the only thing left to show that $Z_2=Z_1=\\{I, -I\\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix\n$$ \\begin{pmatrix}\n-1 &amp; 0\\\\\n0 &amp; 1\n\\end{pmatrix}\n$$\nis not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p>\n" }, { "answer_id": 2091072, "score": 1, "text": "<p>Because I'm lazy, I'll write $G = \\operatorname{GL}(2,\\mathbb{Z})$.</p>\n\n<p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \\{I,-I\\}$ where $I$ is the identity matrix.</p>\n\n<p>Now, consider the upper central series:\n$$1 = Z_0 \\trianglelefteq Z_1 \\trianglelefteq Z_2 \\trianglelefteq \\cdots \\trianglelefteq Z_i \\trianglelefteq \\cdots$$\nwhere $$Z_{i+1} = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_i\\}= \\{ x \\in G \\mid \\forall y \\in G: x^{-1}y^{-1}xy \\in Z_i\\}.$$ In particular, observe that\n$$Z_1 = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_0\\} = \\{ x \\in G \\mid \\forall y \\in G: xy = yx \\} = Z(G) = \\{I,-I\\}.$$\nNow, onwards to $Z_2$. \n$$Z_2 = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_1\\} = \\{ x \\in G \\mid \\forall y \\in G: (xy = yx \\text{ or } xy = -yx) \\}.$$\nLet us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives \n$$\\operatorname{trace}(y) = \\operatorname{trace}(-x^{-1}yx) = -\\operatorname{trace}(x^{-1}yx) = -\\operatorname{trace}(xx^{-1}y) = -\\operatorname{trace}(y)$$ hence $\\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\\operatorname{trace}(y) \\neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p>\n\n<p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \\{I,-I\\}$ for all $i \\geq 1$. </p>\n\n<p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p>\n\n<p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p>\n\n<p>Now, let $x = \\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix}$. Then we must have that\n$$\\begin{pmatrix}x_1 &amp; x_1+x_2\\\\x_3 &amp; x_3 + x_4\\end{pmatrix} = \\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix}\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix} = \\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}\\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix} = \\begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\\\x_3 &amp; x_4\\end{pmatrix},$$\nhence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\\begin{pmatrix}1 &amp; 0\\\\\n1 &amp; 1\\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \\in G$, then the only matrices satisfying this are $I$ and $-I$.</p>\n" }, { "answer_id": 2089961, "score": 12, "text": "<p>One possibility is to show that there is a proper, self-normalizing subgroup,\n$$\n\\left\\{ \\begin{pmatrix} a&amp;b \\\\ 0&amp;d \\end{pmatrix} : a, d = \\pm 1, b \\in \\mathbb{Z}\\right\\}\n$$\nbeing perhaps the <a href=\"https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257\">easiest candidate</a>.</p>\n" }, { "answer_id": 2089998, "score": 12, "text": "<p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\\Bbb{F}_2)$ as the smallest non-nilpotent group.</p>\n" }, { "answer_id": 2091141, "score": 5, "text": "<p>Let's find the upper central series. For ease of notation, let \n$$ T_k = \\begin{pmatrix} 1 &amp; k\\\\0 &amp; 1\\end{pmatrix}$$\nand\n$$ R = \\begin{pmatrix} 0 &amp; 1 \\\\ 1 &amp; 0\\end{pmatrix}$$\nbe translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let\n$$ X = \\begin{pmatrix} a &amp; b \\\\ c &amp; d\\end{pmatrix}$$\nthen $X$ being in the center implies\n\\begin{align}\nT_kXT_{-k} &amp;= X\\\\\nRXR &amp;= X\n\\end{align}\nThis is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\\{I, -I\\}$, with $I$ the $2\\times2$ identity matrix.</p>\n\n<p>What about the second center, $Z_2$? These are elements that are central \"mod $Z_1$\", or elements $X$ such that <strong>for all</strong> $Y\\in GL(2,\\mathbb{Z})$, we have\n$$ YXY^{-1} = \\pm X$$\n[Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p>\n\n<p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\\mathbb{Z})$ thus is not nilpotent.</p>\n\n<p><strong>EDIT</strong>: Adding in more details.</p>\n\n<h2>First center</h2>\n\n<p>If $X$ is in the center of $GL(2,\\mathbb{Z})$, then it commutes with all matrices. In particular, we have\n$$ T_kXT_{-k}=X$$\nfor all $T_k$. Expanding that out, we get four equations:\n\\begin{align}\na+ck &amp;= a\\\\\nb+k(d-a-ck) &amp;= b\\\\\nc &amp;= c\\\\\nd-ck &amp;= d\n\\end{align}\nSince this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$:\n\\begin{equation}\n\\begin{pmatrix}\nd &amp; c\\\\\nb &amp; a\n\\end{pmatrix} = \\begin{pmatrix} a &amp; b\\\\c &amp; d\\end{pmatrix}\n\\end{equation}</p>\n\n<p>Thus, our center is $\\{I, -I\\}$ (we actually need to check these commute with all matrices, but that's easy).</p>\n\n<h2>Second center</h2>\n\n<p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, \"mod $Z_1$\". This means that if $X\\in Z_2$, and $Y\\in GL(2,\\mathbb{Z})$, then there exists $Z_Y\\in Z_1$ such that\n$$ YXY^{-1} = Z_YX$$\nThe element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\\subset Z_1$).</p>\n\n<p>Let's find the second center. 
If $X\\in Z_2$, then for every $k$, there exists $Z_k\\in Z_1=\\{I, -I\\}$ such that\n$$ T_kXT_{-k} = Z_kX$$</p>\n\n<p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$)\n$$ a + ck_1 = a + ck_2$$\nwhich again implies $c=0$. Using $R$ we have\n\\begin{equation}\n\\begin{pmatrix}\nd &amp; c\\\\\nb &amp; a\n\\end{pmatrix} = Z_R\\begin{pmatrix} a &amp; b\\\\c &amp; d\\end{pmatrix}\n\\end{equation}\nwhere $Z_R$ is $I$ or $-I$. Thus we get $b=\\pm c=0$, and $a=\\pm d$.</p>\n\n<p>That means the only thing left to show that $Z_2=Z_1=\\{I, -I\\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix\n$$ \\begin{pmatrix}\n-1 &amp; 0\\\\\n0 &amp; 1\n\\end{pmatrix}\n$$\nis not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p>\n" }, { "answer_id": 2091072, "score": 1, "text": "<p>Because I'm lazy, I'll write $G = \\operatorname{GL}(2,\\mathbb{Z})$.</p>\n\n<p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \\{I,-I\\}$ where $I$ is the identity matrix.</p>\n\n<p>Now, consider the upper central series:\n$$1 = Z_0 \\trianglelefteq Z_1 \\trianglelefteq Z_2 \\trianglelefteq \\cdots \\trianglelefteq Z_i \\trianglelefteq \\cdots$$\nwhere $$Z_{i+1} = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_i\\}= \\{ x \\in G \\mid \\forall y \\in G: x^{-1}y^{-1}xy \\in Z_i\\}.$$ In particular, observe that\n$$Z_1 = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_0\\} = \\{ x \\in G \\mid \\forall y \\in G: xy = yx \\} = Z(G) = \\{I,-I\\}.$$\nNow, onwards to $Z_2$. \n$$Z_2 = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_1\\} = \\{ x \\in G \\mid \\forall y \\in G: (xy = yx \\text{ or } xy = -yx) \\}.$$\nLet us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives \n$$\\operatorname{trace}(y) = \\operatorname{trace}(-x^{-1}yx) = -\\operatorname{trace}(x^{-1}yx) = -\\operatorname{trace}(xx^{-1}y) = -\\operatorname{trace}(y)$$ hence $\\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\\operatorname{trace}(y) \\neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p>\n\n<p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \\{I,-I\\}$ for all $i \\geq 1$. </p>\n\n<p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p>\n\n<p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p>\n\n<p>Now, let $x = \\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix}$. Then we must have that\n$$\\begin{pmatrix}x_1 &amp; x_1+x_2\\\\x_3 &amp; x_3 + x_4\\end{pmatrix} = \\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix}\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix} = \\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}\\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix} = \\begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\\\x_3 &amp; x_4\\end{pmatrix},$$\nhence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\\begin{pmatrix}1 &amp; 0\\\\\n1 &amp; 1\\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \\in G$, then the only matrices satisfying this are $I$ and $-I$.</p>\n" }, { "answer_id": 2089961, "score": 12, "text": "<p>One possibility is to show that there is a proper, self-normalizing subgroup,\n$$\n\\left\\{ \\begin{pmatrix} a&amp;b \\\\ 0&amp;d \\end{pmatrix} : a, d = \\pm 1, b \\in \\mathbb{Z}\\right\\}\n$$\nbeing perhaps the <a href=\"https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257\">easiest candidate</a>.</p>\n" }, { "answer_id": 2089998, "score": 12, "text": "<p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\\Bbb{F}_2)$ as the smallest non-nilpotent group.</p>\n" }, { "answer_id": 2091141, "score": 5, "text": "<p>Let's find the upper central series. For ease of notation, let \n$$ T_k = \\begin{pmatrix} 1 &amp; k\\\\0 &amp; 1\\end{pmatrix}$$\nand\n$$ R = \\begin{pmatrix} 0 &amp; 1 \\\\ 1 &amp; 0\\end{pmatrix}$$\nbe translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let\n$$ X = \\begin{pmatrix} a &amp; b \\\\ c &amp; d\\end{pmatrix}$$\nthen $X$ being in the center implies\n\\begin{align}\nT_kXT_{-k} &amp;= X\\\\\nRXR &amp;= X\n\\end{align}\nThis is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\\{I, -I\\}$, with $I$ the $2\\times2$ identity matrix.</p>\n\n<p>What about the second center, $Z_2$? These are elements that are central \"mod $Z_1$\", or elements $X$ such that <strong>for all</strong> $Y\\in GL(2,\\mathbb{Z})$, we have\n$$ YXY^{-1} = \\pm X$$\n[Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p>\n\n<p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\\mathbb{Z})$ thus is not nilpotent.</p>\n\n<p><strong>EDIT</strong>: Adding in more details.</p>\n\n<h2>First center</h2>\n\n<p>If $X$ is in the center of $GL(2,\\mathbb{Z})$, then it commutes with all matrices. In particular, we have\n$$ T_kXT_{-k}=X$$\nfor all $T_k$. Expanding that out, we get four equations:\n\\begin{align}\na+ck &amp;= a\\\\\nb+k(d-a-ck) &amp;= b\\\\\nc &amp;= c\\\\\nd-ck &amp;= d\n\\end{align}\nSince this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$:\n\\begin{equation}\n\\begin{pmatrix}\nd &amp; c\\\\\nb &amp; a\n\\end{pmatrix} = \\begin{pmatrix} a &amp; b\\\\c &amp; d\\end{pmatrix}\n\\end{equation}</p>\n\n<p>Thus, our center is $\\{I, -I\\}$ (we actually need to check these commute with all matrices, but that's easy).</p>\n\n<h2>Second center</h2>\n\n<p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, \"mod $Z_1$\". This means that if $X\\in Z_2$, and $Y\\in GL(2,\\mathbb{Z})$, then there exists $Z_Y\\in Z_1$ such that\n$$ YXY^{-1} = Z_YX$$\nThe element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\\subset Z_1$).</p>\n\n<p>Let's find the second center. 
If $X\\in Z_2$, then for every $k$, there exists $Z_k\\in Z_1=\\{I, -I\\}$ such that\n$$ T_kXT_{-k} = Z_kX$$</p>\n\n<p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$)\n$$ a + ck_1 = a + ck_2$$\nwhich again implies $c=0$. Using $R$ we have\n\\begin{equation}\n\\begin{pmatrix}\nd &amp; c\\\\\nb &amp; a\n\\end{pmatrix} = Z_R\\begin{pmatrix} a &amp; b\\\\c &amp; d\\end{pmatrix}\n\\end{equation}\nwhere $Z_R$ is $I$ or $-I$. Thus we get $b=\\pm c=0$, and $a=\\pm d$.</p>\n\n<p>That means the only thing left to show that $Z_2=Z_1=\\{I, -I\\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix\n$$ \\begin{pmatrix}\n-1 &amp; 0\\\\\n0 &amp; 1\n\\end{pmatrix}\n$$\nis not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p>\n" }, { "answer_id": 2091072, "score": 1, "text": "<p>Because I'm lazy, I'll write $G = \\operatorname{GL}(2,\\mathbb{Z})$.</p>\n\n<p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \\{I,-I\\}$ where $I$ is the identity matrix.</p>\n\n<p>Now, consider the upper central series:\n$$1 = Z_0 \\trianglelefteq Z_1 \\trianglelefteq Z_2 \\trianglelefteq \\cdots \\trianglelefteq Z_i \\trianglelefteq \\cdots$$\nwhere $$Z_{i+1} = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_i\\}= \\{ x \\in G \\mid \\forall y \\in G: x^{-1}y^{-1}xy \\in Z_i\\}.$$ In particular, observe that\n$$Z_1 = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_0\\} = \\{ x \\in G \\mid \\forall y \\in G: xy = yx \\} = Z(G) = \\{I,-I\\}.$$\nNow, onwards to $Z_2$. \n$$Z_2 = \\{ x \\in G \\mid \\forall y \\in G: [x,y] \\in Z_1\\} = \\{ x \\in G \\mid \\forall y \\in G: (xy = yx \\text{ or } xy = -yx) \\}.$$\nLet us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives \n$$\\operatorname{trace}(y) = \\operatorname{trace}(-x^{-1}yx) = -\\operatorname{trace}(x^{-1}yx) = -\\operatorname{trace}(xx^{-1}y) = -\\operatorname{trace}(y)$$ hence $\\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\\operatorname{trace}(y) \\neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p>\n\n<p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \\{I,-I\\}$ for all $i \\geq 1$. </p>\n\n<p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p>\n\n<p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p>\n\n<p>Now, let $x = \\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix}$. Then we must have that\n$$\\begin{pmatrix}x_1 &amp; x_1+x_2\\\\x_3 &amp; x_3 + x_4\\end{pmatrix} = \\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix}\\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix} = \\begin{pmatrix}1 &amp; 1\\\\\n0 &amp; 1\\end{pmatrix}\\begin{pmatrix}x_1 &amp; x_2\\\\x_3 &amp; x_4\\end{pmatrix} = \\begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\\\x_3 &amp; x_4\\end{pmatrix},$$\nhence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\\begin{pmatrix}1 &amp; 0\\\\\n1 &amp; 1\\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \\in G$, the determinant $x_1x_4 - x_2x_3 = x_1^2$ must be $\\pm 1$, which over $\\mathbb{Z}$ forces $x_1 = \\pm 1$; so the only matrices satisfying this are $I$ and $-I$.</p>\n" } ]
Question: Basic proof that $GL(2, \mathbb{Z})$ is not nilpotent
Answer 1: <p>One possibility is to show that there is a proper, self-normalizing subgroup,
$$
\left\{ \begin{pmatrix} a&amp;b \\ 0&amp;d \end{pmatrix} : a, d = \pm 1, b \in \mathbb{Z}\right\}
$$
being perhaps the <a href="https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257">easiest candidate</a>.</p>
Answer 2: <p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.</p>
Answer 3: <p>Let's find the upper central series. For ease of notation, let
$$ T_k = \begin{pmatrix} 1 &amp; k\\0 &amp; 1\end{pmatrix}$$
and
$$ R = \begin{pmatrix} 0 &amp; 1 \\ 1 &amp; 0\end{pmatrix}$$
be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z})$ must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let
$$ X = \begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$$
then $X$ being in the center implies
\begin{align}
T_kXT_{-k} &amp;= X\\
RXR &amp;= X
\end{align}
This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplications and equate matrix coefficients; the expansion is written out after Answer 4 below]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.</p>

<p>What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that <strong>for all</strong> $Y\in GL(2,\mathbb{Z})$, we have
$$ YXY^{-1} = \pm X$$
[Note: the choice of $+$ or $-$ on the right-hand side is dependent on $Y$.]</p>

<p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.</p>

<p><strong>EDIT</strong>: Adding in more details.</p>

<h2>First center</h2>

<p>If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have
$$ T_kXT_{-k}=X$$
for all $T_k$. Expanding that out, we get four equations:
\begin{align}
a+ck &amp;= a\\
b+k(d-a-ck) &amp;= b\\
c &amp;= c\\
d-ck &amp;= d
\end{align}
Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$:
\begin{equation}
\begin{pmatrix}
d &amp; c\\
b &amp; a
\end{pmatrix} = \begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix}
\end{equation}</p>

<p>Thus, our center is $\{I, -I\}$ (we actually need to check these commute with all matrices, but that's easy).</p>

<h2>Second center</h2>

<p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$, and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that
$$ YXY^{-1} = Z_YX$$
The element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$.)</p>

<p>Let's find the second center. 
If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that
$$ T_kXT_{-k} = Z_kX$$</p>

<p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$)
$$ a + ck_1 = a + ck_2$$
which again implies $c=0$. Using $R$ we have
\begin{equation}
\begin{pmatrix}
d &amp; c\\
b &amp; a
\end{pmatrix} = Z_R\begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix}
\end{equation}
where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$ and $a=\pm d$.</p>

<p>That means that the only thing left, in order to show that $Z_2=Z_1=\{I, -I\}$, is to show that $a=-d$ cannot happen. That is, we need to show the matrix
$$ \begin{pmatrix}
-1 &amp; 0\\
0 &amp; 1
\end{pmatrix}
$$
is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$; a worked check appears after Answer 4 below).</p>
Answer 4: <p>Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.</p>

<p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix.</p>

<p>Now, consider the upper central series:
$$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$
where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that
$$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$
Now, onwards to $Z_2$.
$$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$
Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives
$$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds are $I$ and $-I$ (see below for a proof of this claim).</p>

<p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc.: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$.</p>

<p>But then the upper central series never reaches $G$; hence $G$ is not nilpotent!</p>

<p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these matrices are in the center of $G$.</p>

<p>Now, let $x = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}$. Then we must have that
$$\begin{pmatrix}x_1 &amp; x_1+x_2\\x_3 &amp; x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}\begin{pmatrix}1 &amp; 1\\
0 &amp; 1\end{pmatrix} = \begin{pmatrix}1 &amp; 1\\
0 &amp; 1\end{pmatrix}\begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\x_3 &amp; x_4\end{pmatrix},$$
hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 &amp; 0\\
1 &amp; 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \in G$, then the only matrices satisfying this are $I$ and $-I$.</p> Answer 13: <p>One possibility is to show that there is a proper, self-normalizing subgroup, $$ \left\{ \begin{pmatrix} a&amp;b \\ 0&amp;d \end{pmatrix} : a, d = \pm 1, b \in \mathbb{Z}\right\} $$ being perhaps the <a href="https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257">easiest candidate</a>.</p> Answer 14: <p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.</p> Answer 15: <p>Let's find the upper central series. For ease of notation, let $$ T_k = \begin{pmatrix} 1 &amp; k\\0 &amp; 1\end{pmatrix}$$ and $$ R = \begin{pmatrix} 0 &amp; 1 \\ 1 &amp; 0\end{pmatrix}$$ be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let $$ X = \begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$$ then $X$ being in the center implies \begin{align} T_kXT_{-k} &amp;= X\\ RXR &amp;= X \end{align} This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.</p> <p>What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that <strong>for all</strong> $Y\in GL(2,\mathbb{Z})$, we have $$ YXY^{-1} = \pm X$$ [Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p> <p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.</p> <p><strong>EDIT</strong>: Adding in more details.</p> <h2>First center</h2> <p>If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have $$ T_kXT_{-k}=X$$ for all $T_k$. Expanding that out, we get four equations: \begin{align} a+ck &amp;= a\\ b+k(d-a-ck) &amp;= b\\ c &amp;= c\\ d-ck &amp;= d \end{align} Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$: \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = \begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation}</p> <p>Thus, our center is $\{I, -I\}$ (we actually need to check these commute with all matrices, but that's easy).</p> <h2>Second center</h2> <p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$, and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that $$ YXY^{-1} = Z_YX$$ The element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$).</p> <p>Let's find the second center. 
If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that $$ T_kXT_{-k} = Z_kX$$</p> <p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$) $$ a + ck_1 = a + ck_2$$ which again implies $c=0$. Using $R$ we have \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = Z_R\begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation} where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$, and $a=\pm d$.</p> <p>That means the only thing left to show that $Z_2=Z_1=\{I, -I\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix $$ \begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1 \end{pmatrix} $$ is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p> Answer 16: <p>Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.</p> <p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix.</p> <p>Now, consider the upper central series: $$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$ where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that $$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$ Now, onwards to $Z_2$. $$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$ Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives $$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p> <p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$. </p> <p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p> <p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p> <p>Now, let $x = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}$. Then we must have that $$\begin{pmatrix}x_1 &amp; x_1+x_2\\x_3 &amp; x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}\begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix} = \begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\x_3 &amp; x_4\end{pmatrix},$$ hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 &amp; 0\\ 1 &amp; 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \in G$, then the only matrices satisfying this are $I$ and $-I$.</p> Answer 17: <p>One possibility is to show that there is a proper, self-normalizing subgroup, $$ \left\{ \begin{pmatrix} a&amp;b \\ 0&amp;d \end{pmatrix} : a, d = \pm 1, b \in \mathbb{Z}\right\} $$ being perhaps the <a href="https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257">easiest candidate</a>.</p> Answer 18: <p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.</p> Answer 19: <p>Let's find the upper central series. For ease of notation, let $$ T_k = \begin{pmatrix} 1 &amp; k\\0 &amp; 1\end{pmatrix}$$ and $$ R = \begin{pmatrix} 0 &amp; 1 \\ 1 &amp; 0\end{pmatrix}$$ be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let $$ X = \begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$$ then $X$ being in the center implies \begin{align} T_kXT_{-k} &amp;= X\\ RXR &amp;= X \end{align} This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.</p> <p>What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that <strong>for all</strong> $Y\in GL(2,\mathbb{Z})$, we have $$ YXY^{-1} = \pm X$$ [Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p> <p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.</p> <p><strong>EDIT</strong>: Adding in more details.</p> <h2>First center</h2> <p>If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have $$ T_kXT_{-k}=X$$ for all $T_k$. Expanding that out, we get four equations: \begin{align} a+ck &amp;= a\\ b+k(d-a-ck) &amp;= b\\ c &amp;= c\\ d-ck &amp;= d \end{align} Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$: \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = \begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation}</p> <p>Thus, our center is $\{I, -I\}$ (we actually need to check these commute with all matrices, but that's easy).</p> <h2>Second center</h2> <p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$, and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that $$ YXY^{-1} = Z_YX$$ The element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$).</p> <p>Let's find the second center. 
If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that $$ T_kXT_{-k} = Z_kX$$</p> <p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$) $$ a + ck_1 = a + ck_2$$ which again implies $c=0$. Using $R$ we have \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = Z_R\begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation} where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$, and $a=\pm d$.</p> <p>That means the only thing left to show that $Z_2=Z_1=\{I, -I\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix $$ \begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1 \end{pmatrix} $$ is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p> Answer 20: <p>Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.</p> <p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix.</p> <p>Now, consider the upper central series: $$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$ where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that $$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$ Now, onwards to $Z_2$. $$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$ Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives $$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p> <p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$. </p> <p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p> <p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p> <p>Now, let $x = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}$. Then we must have that $$\begin{pmatrix}x_1 &amp; x_1+x_2\\x_3 &amp; x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}\begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix} = \begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\x_3 &amp; x_4\end{pmatrix},$$ hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 &amp; 0\\ 1 &amp; 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \in G$, then the only matrices satisfying this are $I$ and $-I$.</p> Answer 21: <p>One possibility is to show that there is a proper, self-normalizing subgroup, $$ \left\{ \begin{pmatrix} a&amp;b \\ 0&amp;d \end{pmatrix} : a, d = \pm 1, b \in \mathbb{Z}\right\} $$ being perhaps the <a href="https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257">easiest candidate</a>.</p> Answer 22: <p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.</p> Answer 23: <p>Let's find the upper central series. For ease of notation, let $$ T_k = \begin{pmatrix} 1 &amp; k\\0 &amp; 1\end{pmatrix}$$ and $$ R = \begin{pmatrix} 0 &amp; 1 \\ 1 &amp; 0\end{pmatrix}$$ be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let $$ X = \begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$$ then $X$ being in the center implies \begin{align} T_kXT_{-k} &amp;= X\\ RXR &amp;= X \end{align} This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.</p> <p>What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that <strong>for all</strong> $Y\in GL(2,\mathbb{Z})$, we have $$ YXY^{-1} = \pm X$$ [Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p> <p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.</p> <p><strong>EDIT</strong>: Adding in more details.</p> <h2>First center</h2> <p>If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have $$ T_kXT_{-k}=X$$ for all $T_k$. Expanding that out, we get four equations: \begin{align} a+ck &amp;= a\\ b+k(d-a-ck) &amp;= b\\ c &amp;= c\\ d-ck &amp;= d \end{align} Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$: \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = \begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation}</p> <p>Thus, our center is $\{I, -I\}$ (we actually need to check these commute with all matrices, but that's easy).</p> <h2>Second center</h2> <p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$, and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that $$ YXY^{-1} = Z_YX$$ The element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$).</p> <p>Let's find the second center. 
If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that $$ T_kXT_{-k} = Z_kX$$</p> <p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$) $$ a + ck_1 = a + ck_2$$ which again implies $c=0$. Using $R$ we have \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = Z_R\begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation} where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$, and $a=\pm d$.</p> <p>That means the only thing left to show that $Z_2=Z_1=\{I, -I\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix $$ \begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1 \end{pmatrix} $$ is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p> Answer 24: <p>Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.</p> <p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix.</p> <p>Now, consider the upper central series: $$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$ where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that $$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$ Now, onwards to $Z_2$. $$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$ Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives $$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p> <p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$. </p> <p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p> <p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p> <p>Now, let $x = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}$. Then we must have that $$\begin{pmatrix}x_1 &amp; x_1+x_2\\x_3 &amp; x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}\begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix} = \begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\x_3 &amp; x_4\end{pmatrix},$$ hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 &amp; 0\\ 1 &amp; 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \in G$, then the only matrices satisfying this are $I$ and $-I$.</p> Answer 25: <p>One possibility is to show that there is a proper, self-normalizing subgroup, $$ \left\{ \begin{pmatrix} a&amp;b \\ 0&amp;d \end{pmatrix} : a, d = \pm 1, b \in \mathbb{Z}\right\} $$ being perhaps the <a href="https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257">easiest candidate</a>.</p> Answer 26: <p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.</p> Answer 27: <p>Let's find the upper central series. For ease of notation, let $$ T_k = \begin{pmatrix} 1 &amp; k\\0 &amp; 1\end{pmatrix}$$ and $$ R = \begin{pmatrix} 0 &amp; 1 \\ 1 &amp; 0\end{pmatrix}$$ be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let $$ X = \begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$$ then $X$ being in the center implies \begin{align} T_kXT_{-k} &amp;= X\\ RXR &amp;= X \end{align} This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.</p> <p>What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that <strong>for all</strong> $Y\in GL(2,\mathbb{Z})$, we have $$ YXY^{-1} = \pm X$$ [Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p> <p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.</p> <p><strong>EDIT</strong>: Adding in more details.</p> <h2>First center</h2> <p>If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have $$ T_kXT_{-k}=X$$ for all $T_k$. Expanding that out, we get four equations: \begin{align} a+ck &amp;= a\\ b+k(d-a-ck) &amp;= b\\ c &amp;= c\\ d-ck &amp;= d \end{align} Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$: \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = \begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation}</p> <p>Thus, our center is $\{I, -I\}$ (we actually need to check these commute with all matrices, but that's easy).</p> <h2>Second center</h2> <p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$, and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that $$ YXY^{-1} = Z_YX$$ The element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$).</p> <p>Let's find the second center. 
If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that $$ T_kXT_{-k} = Z_kX$$</p> <p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$) $$ a + ck_1 = a + ck_2$$ which again implies $c=0$. Using $R$ we have \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = Z_R\begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation} where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$, and $a=\pm d$.</p> <p>That means the only thing left to show that $Z_2=Z_1=\{I, -I\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix $$ \begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1 \end{pmatrix} $$ is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p> Answer 28: <p>Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.</p> <p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix.</p> <p>Now, consider the upper central series: $$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$ where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that $$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$ Now, onwards to $Z_2$. $$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$ Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives $$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p> <p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$. </p> <p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p> <p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p> <p>Now, let $x = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}$. Then we must have that $$\begin{pmatrix}x_1 &amp; x_1+x_2\\x_3 &amp; x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}\begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix} = \begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\x_3 &amp; x_4\end{pmatrix},$$ hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 &amp; 0\\ 1 &amp; 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \in G$, then the only matrices satisfying this are $I$ and $-I$.</p> Answer 29: <p>One possibility is to show that there is a proper, self-normalizing subgroup, $$ \left\{ \begin{pmatrix} a&amp;b \\ 0&amp;d \end{pmatrix} : a, d = \pm 1, b \in \mathbb{Z}\right\} $$ being perhaps the <a href="https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257">easiest candidate</a>.</p> Answer 30: <p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.</p> Answer 31: <p>Let's find the upper central series. For ease of notation, let $$ T_k = \begin{pmatrix} 1 &amp; k\\0 &amp; 1\end{pmatrix}$$ and $$ R = \begin{pmatrix} 0 &amp; 1 \\ 1 &amp; 0\end{pmatrix}$$ be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let $$ X = \begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$$ then $X$ being in the center implies \begin{align} T_kXT_{-k} &amp;= X\\ RXR &amp;= X \end{align} This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.</p> <p>What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that <strong>for all</strong> $Y\in GL(2,\mathbb{Z})$, we have $$ YXY^{-1} = \pm X$$ [Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p> <p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.</p> <p><strong>EDIT</strong>: Adding in more details.</p> <h2>First center</h2> <p>If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have $$ T_kXT_{-k}=X$$ for all $T_k$. Expanding that out, we get four equations: \begin{align} a+ck &amp;= a\\ b+k(d-a-ck) &amp;= b\\ c &amp;= c\\ d-ck &amp;= d \end{align} Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$: \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = \begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation}</p> <p>Thus, our center is $\{I, -I\}$ (we actually need to check these commute with all matrices, but that's easy).</p> <h2>Second center</h2> <p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$, and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that $$ YXY^{-1} = Z_YX$$ The element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$).</p> <p>Let's find the second center. 
If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that $$ T_kXT_{-k} = Z_kX$$</p> <p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$) $$ a + ck_1 = a + ck_2$$ which again implies $c=0$. Using $R$ we have \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = Z_R\begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation} where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$, and $a=\pm d$.</p> <p>That means the only thing left to show that $Z_2=Z_1=\{I, -I\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix $$ \begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1 \end{pmatrix} $$ is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p> Answer 32: <p>Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.</p> <p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix.</p> <p>Now, consider the upper central series: $$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$ where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that $$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$ Now, onwards to $Z_2$. $$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$ Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives $$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p> <p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$. </p> <p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p> <p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p> <p>Now, let $x = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}$. Then we must have that $$\begin{pmatrix}x_1 &amp; x_1+x_2\\x_3 &amp; x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}\begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix} = \begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\x_3 &amp; x_4\end{pmatrix},$$ hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 &amp; 0\\ 1 &amp; 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \in G$, then the only matrices satisfying this are $I$ and $-I$.</p> Answer 33: <p>One possibility is to show that there is a proper, self-normalizing subgroup, $$ \left\{ \begin{pmatrix} a&amp;b \\ 0&amp;d \end{pmatrix} : a, d = \pm 1, b \in \mathbb{Z}\right\} $$ being perhaps the <a href="https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257">easiest candidate</a>.</p> Answer 34: <p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.</p> Answer 35: <p>Let's find the upper central series. For ease of notation, let $$ T_k = \begin{pmatrix} 1 &amp; k\\0 &amp; 1\end{pmatrix}$$ and $$ R = \begin{pmatrix} 0 &amp; 1 \\ 1 &amp; 0\end{pmatrix}$$ be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let $$ X = \begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$$ then $X$ being in the center implies \begin{align} T_kXT_{-k} &amp;= X\\ RXR &amp;= X \end{align} This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.</p> <p>What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that <strong>for all</strong> $Y\in GL(2,\mathbb{Z})$, we have $$ YXY^{-1} = \pm X$$ [Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p> <p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.</p> <p><strong>EDIT</strong>: Adding in more details.</p> <h2>First center</h2> <p>If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have $$ T_kXT_{-k}=X$$ for all $T_k$. Expanding that out, we get four equations: \begin{align} a+ck &amp;= a\\ b+k(d-a-ck) &amp;= b\\ c &amp;= c\\ d-ck &amp;= d \end{align} Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$: \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = \begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation}</p> <p>Thus, our center is $\{I, -I\}$ (we actually need to check these commute with all matrices, but that's easy).</p> <h2>Second center</h2> <p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$, and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that $$ YXY^{-1} = Z_YX$$ The element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$).</p> <p>Let's find the second center. 
If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that $$ T_kXT_{-k} = Z_kX$$</p> <p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$) $$ a + ck_1 = a + ck_2$$ which again implies $c=0$. Using $R$ we have \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = Z_R\begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation} where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$, and $a=\pm d$.</p> <p>That means the only thing left to show that $Z_2=Z_1=\{I, -I\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix $$ \begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1 \end{pmatrix} $$ is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p> Answer 36: <p>Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.</p> <p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix.</p> <p>Now, consider the upper central series: $$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$ where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that $$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$ Now, onwards to $Z_2$. $$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$ Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives $$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p> <p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$. </p> <p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p> <p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p> <p>Now, let $x = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}$. Then we must have that $$\begin{pmatrix}x_1 &amp; x_1+x_2\\x_3 &amp; x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}\begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix} = \begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\x_3 &amp; x_4\end{pmatrix},$$ hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 &amp; 0\\ 1 &amp; 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \in G$, then the only matrices satisfying this are $I$ and $-I$.</p> Answer 37: <p>One possibility is to show that there is a proper, self-normalizing subgroup, $$ \left\{ \begin{pmatrix} a&amp;b \\ 0&amp;d \end{pmatrix} : a, d = \pm 1, b \in \mathbb{Z}\right\} $$ being perhaps the <a href="https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257">easiest candidate</a>.</p> Answer 38: <p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.</p> Answer 39: <p>Let's find the upper central series. For ease of notation, let $$ T_k = \begin{pmatrix} 1 &amp; k\\0 &amp; 1\end{pmatrix}$$ and $$ R = \begin{pmatrix} 0 &amp; 1 \\ 1 &amp; 0\end{pmatrix}$$ be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let $$ X = \begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$$ then $X$ being in the center implies \begin{align} T_kXT_{-k} &amp;= X\\ RXR &amp;= X \end{align} This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.</p> <p>What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that <strong>for all</strong> $Y\in GL(2,\mathbb{Z})$, we have $$ YXY^{-1} = \pm X$$ [Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p> <p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.</p> <p><strong>EDIT</strong>: Adding in more details.</p> <h2>First center</h2> <p>If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have $$ T_kXT_{-k}=X$$ for all $T_k$. Expanding that out, we get four equations: \begin{align} a+ck &amp;= a\\ b+k(d-a-ck) &amp;= b\\ c &amp;= c\\ d-ck &amp;= d \end{align} Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$: \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = \begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation}</p> <p>Thus, our center is $\{I, -I\}$ (we actually need to check these commute with all matrices, but that's easy).</p> <h2>Second center</h2> <p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$, and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that $$ YXY^{-1} = Z_YX$$ The element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$).</p> <p>Let's find the second center. 
If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that $$ T_kXT_{-k} = Z_kX$$</p> <p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$) $$ a + ck_1 = a + ck_2$$ which again implies $c=0$. Using $R$ we have \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = Z_R\begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation} where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$, and $a=\pm d$.</p> <p>That means the only thing left to show that $Z_2=Z_1=\{I, -I\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix $$ \begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1 \end{pmatrix} $$ is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p> Answer 40: <p>Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.</p> <p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix.</p> <p>Now, consider the upper central series: $$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$ where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that $$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$ Now, onwards to $Z_2$. $$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$ Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives $$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p> <p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$. </p> <p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p> <p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p> <p>Now, let $x = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}$. Then we must have that $$\begin{pmatrix}x_1 &amp; x_1+x_2\\x_3 &amp; x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}\begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix} = \begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\x_3 &amp; x_4\end{pmatrix},$$ hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 &amp; 0\\ 1 &amp; 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \in G$, then the only matrices satisfying this are $I$ and $-I$.</p> Answer 41: <p>One possibility is to show that there is a proper, self-normalizing subgroup, $$ \left\{ \begin{pmatrix} a&amp;b \\ 0&amp;d \end{pmatrix} : a, d = \pm 1, b \in \mathbb{Z}\right\} $$ being perhaps the <a href="https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257">easiest candidate</a>.</p> Answer 42: <p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.</p> Answer 43: <p>Let's find the upper central series. For ease of notation, let $$ T_k = \begin{pmatrix} 1 &amp; k\\0 &amp; 1\end{pmatrix}$$ and $$ R = \begin{pmatrix} 0 &amp; 1 \\ 1 &amp; 0\end{pmatrix}$$ be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let $$ X = \begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$$ then $X$ being in the center implies \begin{align} T_kXT_{-k} &amp;= X\\ RXR &amp;= X \end{align} This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.</p> <p>What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that <strong>for all</strong> $Y\in GL(2,\mathbb{Z})$, we have $$ YXY^{-1} = \pm X$$ [Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p> <p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.</p> <p><strong>EDIT</strong>: Adding in more details.</p> <h2>First center</h2> <p>If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have $$ T_kXT_{-k}=X$$ for all $T_k$. Expanding that out, we get four equations: \begin{align} a+ck &amp;= a\\ b+k(d-a-ck) &amp;= b\\ c &amp;= c\\ d-ck &amp;= d \end{align} Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$: \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = \begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation}</p> <p>Thus, our center is $\{I, -I\}$ (we actually need to check these commute with all matrices, but that's easy).</p> <h2>Second center</h2> <p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$, and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that $$ YXY^{-1} = Z_YX$$ The element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$).</p> <p>Let's find the second center. 
If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that $$ T_kXT_{-k} = Z_kX$$</p> <p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$) $$ a + ck_1 = a + ck_2$$ which again implies $c=0$. Using $R$ we have \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = Z_R\begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation} where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$, and $a=\pm d$.</p> <p>That means the only thing left to show that $Z_2=Z_1=\{I, -I\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix $$ \begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1 \end{pmatrix} $$ is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p> Answer 44: <p>Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.</p> <p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix.</p> <p>Now, consider the upper central series: $$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$ where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that $$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$ Now, onwards to $Z_2$. $$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$ Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives $$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p> <p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$. </p> <p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p> <p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p> <p>Now, let $x = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}$. Then we must have that $$\begin{pmatrix}x_1 &amp; x_1+x_2\\x_3 &amp; x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}\begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix} = \begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\x_3 &amp; x_4\end{pmatrix},$$ hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 &amp; 0\\ 1 &amp; 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \in G$, then the only matrices satisfying this are $I$ and $-I$.</p> Answer 45: <p>One possibility is to show that there is a proper, self-normalizing subgroup, $$ \left\{ \begin{pmatrix} a&amp;b \\ 0&amp;d \end{pmatrix} : a, d = \pm 1, b \in \mathbb{Z}\right\} $$ being perhaps the <a href="https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257">easiest candidate</a>.</p> Answer 46: <p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.</p> Answer 47: <p>Let's find the upper central series. For ease of notation, let $$ T_k = \begin{pmatrix} 1 &amp; k\\0 &amp; 1\end{pmatrix}$$ and $$ R = \begin{pmatrix} 0 &amp; 1 \\ 1 &amp; 0\end{pmatrix}$$ be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let $$ X = \begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$$ then $X$ being in the center implies \begin{align} T_kXT_{-k} &amp;= X\\ RXR &amp;= X \end{align} This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.</p> <p>What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that <strong>for all</strong> $Y\in GL(2,\mathbb{Z})$, we have $$ YXY^{-1} = \pm X$$ [Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p> <p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.</p> <p><strong>EDIT</strong>: Adding in more details.</p> <h2>First center</h2> <p>If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have $$ T_kXT_{-k}=X$$ for all $T_k$. Expanding that out, we get four equations: \begin{align} a+ck &amp;= a\\ b+k(d-a-ck) &amp;= b\\ c &amp;= c\\ d-ck &amp;= d \end{align} Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$: \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = \begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation}</p> <p>Thus, our center is $\{I, -I\}$ (we actually need to check these commute with all matrices, but that's easy).</p> <h2>Second center</h2> <p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$, and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that $$ YXY^{-1} = Z_YX$$ The element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$).</p> <p>Let's find the second center. 
If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that $$ T_kXT_{-k} = Z_kX$$</p> <p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$) $$ a + ck_1 = a + ck_2$$ which again implies $c=0$. Using $R$ we have \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = Z_R\begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation} where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$, and $a=\pm d$.</p> <p>That means the only thing left to show that $Z_2=Z_1=\{I, -I\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix $$ \begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1 \end{pmatrix} $$ is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p> Answer 48: <p>Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.</p> <p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix.</p> <p>Now, consider the upper central series: $$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$ where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that $$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$ Now, onwards to $Z_2$. $$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$ Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives $$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p> <p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$. </p> <p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p> <p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p> <p>Now, let $x = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}$. Then we must have that $$\begin{pmatrix}x_1 &amp; x_1+x_2\\x_3 &amp; x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}\begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix} = \begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\x_3 &amp; x_4\end{pmatrix},$$ hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 &amp; 0\\ 1 &amp; 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \in G$, then the only matrices satisfying this are $I$ and $-I$.</p> Answer 49: <p>One possibility is to show that there is a proper, self-normalizing subgroup, $$ \left\{ \begin{pmatrix} a&amp;b \\ 0&amp;d \end{pmatrix} : a, d = \pm 1, b \in \mathbb{Z}\right\} $$ being perhaps the <a href="https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257">easiest candidate</a>.</p> Answer 50: <p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.</p> Answer 51: <p>Let's find the upper central series. For ease of notation, let $$ T_k = \begin{pmatrix} 1 &amp; k\\0 &amp; 1\end{pmatrix}$$ and $$ R = \begin{pmatrix} 0 &amp; 1 \\ 1 &amp; 0\end{pmatrix}$$ be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let $$ X = \begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$$ then $X$ being in the center implies \begin{align} T_kXT_{-k} &amp;= X\\ RXR &amp;= X \end{align} This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.</p> <p>What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that <strong>for all</strong> $Y\in GL(2,\mathbb{Z})$, we have $$ YXY^{-1} = \pm X$$ [Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p> <p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.</p> <p><strong>EDIT</strong>: Adding in more details.</p> <h2>First center</h2> <p>If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have $$ T_kXT_{-k}=X$$ for all $T_k$. Expanding that out, we get four equations: \begin{align} a+ck &amp;= a\\ b+k(d-a-ck) &amp;= b\\ c &amp;= c\\ d-ck &amp;= d \end{align} Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$: \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = \begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation}</p> <p>Thus, our center is $\{I, -I\}$ (we actually need to check these commute with all matrices, but that's easy).</p> <h2>Second center</h2> <p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$, and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that $$ YXY^{-1} = Z_YX$$ The element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$).</p> <p>Let's find the second center. 
If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that $$ T_kXT_{-k} = Z_kX$$</p> <p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$) $$ a + ck_1 = a + ck_2$$ which again implies $c=0$. Using $R$ we have \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = Z_R\begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation} where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$, and $a=\pm d$.</p> <p>That means the only thing left to show that $Z_2=Z_1=\{I, -I\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix $$ \begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1 \end{pmatrix} $$ is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p> Answer 52: <p>Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.</p> <p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix.</p> <p>Now, consider the upper central series: $$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$ where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that $$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$ Now, onwards to $Z_2$. $$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$ Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives $$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p> <p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$. </p> <p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p> <p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p> <p>Now, let $x = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}$. Then we must have that $$\begin{pmatrix}x_1 &amp; x_1+x_2\\x_3 &amp; x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}\begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix} = \begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\x_3 &amp; x_4\end{pmatrix},$$ hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 &amp; 0\\ 1 &amp; 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. 
But since $x \in G$, then the only matrices satisfying this are $I$ and $-I$.</p> Answer 53: <p>One possibility is to show that there is a proper, self-normalizing subgroup, $$ \left\{ \begin{pmatrix} a&amp;b \\ 0&amp;d \end{pmatrix} : a, d = \pm 1, b \in \mathbb{Z}\right\} $$ being perhaps the <a href="https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f/2087257#2087257">easiest candidate</a>.</p> Answer 54: <p>Another possibility is to show that the group has a non-nilpotent homomorphic image. Consider the reduction modulo two mapping, and identify $GL_2(\Bbb{F}_2)$ as the smallest non-nilpotent group.</p> Answer 55: <p>Let's find the upper central series. For ease of notation, let $$ T_k = \begin{pmatrix} 1 &amp; k\\0 &amp; 1\end{pmatrix}$$ and $$ R = \begin{pmatrix} 0 &amp; 1 \\ 1 &amp; 0\end{pmatrix}$$ be translation and reflection matrices, with inverses $T_k^{-1} = T_{-k}$ and $R^{-1}=R$. The center of $GL(2,\mathbb{Z}$) must surely commute with all $T_k$ (that is, for all $k$), and also with $R$. If we let $$ X = \begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$$ then $X$ being in the center implies \begin{align} T_kXT_{-k} &amp;= X\\ RXR &amp;= X \end{align} This is enough to imply $b=c=0$ and $a=d$ [to see this, just do the multiplcations and equate matrix coefficients]; that is, the center is $Z_1=\{I, -I\}$, with $I$ the $2\times2$ identity matrix.</p> <p>What about the second center, $Z_2$? These are elements that are central "mod $Z_1$", or elements $X$ such that <strong>for all</strong> $Y\in GL(2,\mathbb{Z})$, we have $$ YXY^{-1} = \pm X$$ [Note: the choice of $+$ or $-$ on the right hand side is dependent on $Y$.]</p> <p>Again, using $T_k$ and $R$, show that $X$ would still have to be $\pm I$. This means $Z_2=Z_1$, and thus our upper central series has stalled (and hasn't become the whole group); the group $GL(2,\mathbb{Z})$ thus is not nilpotent.</p> <p><strong>EDIT</strong>: Adding in more details.</p> <h2>First center</h2> <p>If $X$ is in the center of $GL(2,\mathbb{Z})$, then it commutes with all matrices. In particular, we have $$ T_kXT_{-k}=X$$ for all $T_k$. Expanding that out, we get four equations: \begin{align} a+ck &amp;= a\\ b+k(d-a-ck) &amp;= b\\ c &amp;= c\\ d-ck &amp;= d \end{align} Since this holds for any $k$, we get from the first equation $a+ck_1=a+ck_2$ for $k_1\neq k_2$, which immediately implies $c=0$. Knowing $c=0$, the second equation similarly implies $a=d$. To get $b=c$, we expand $RXR=X$: \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = \begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation}</p> <p>Thus, our center is $\{I, -I\}$ (we actually need to check these commute with all matrices, but that's easy).</p> <h2>Second center</h2> <p>The second center is defined as $Z_2(G)/Z_1(G)=Z(G/Z_1(G))$; that is, it consists of the matrices that commute with all others, "mod $Z_1$". This means that if $X\in Z_2$, and $Y\in GL(2,\mathbb{Z})$, then there exists $Z_Y\in Z_1$ such that $$ YXY^{-1} = Z_YX$$ The element from $Z_1$ will probably depend on $Y$; the point is that it always exists. (If you want to think about this more abstractly, the above is just saying that $[Z_2, G]\subset Z_1$).</p> <p>Let's find the second center. 
If $X\in Z_2$, then for every $k$, there exists $Z_k\in Z_1=\{I, -I\}$ such that $$ T_kXT_{-k} = Z_kX$$</p> <p>Since there are only two choices for $Z_k$ (namely, $I$ or $-I$), and since there are infinitely many $k$s, we can choose $k_1, k_2$ such that $Z_{k_1}=Z_{k_2}$, and thus just as above, we get (after possibly cancelling a negative sign if $Z_{k_1}=Z_{k_2}=-I$) $$ a + ck_1 = a + ck_2$$ which again implies $c=0$. Using $R$ we have \begin{equation} \begin{pmatrix} d &amp; c\\ b &amp; a \end{pmatrix} = Z_R\begin{pmatrix} a &amp; b\\c &amp; d\end{pmatrix} \end{equation} where $Z_R$ is $I$ or $-I$. Thus we get $b=\pm c=0$, and $a=\pm d$.</p> <p>That means the only thing left to show that $Z_2=Z_1=\{I, -I\}$ is to show that $a=-d$ cannot happen. That is, we need to show the matrix $$ \begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1 \end{pmatrix} $$ is not in $Z_2$. I'll leave this as an exercise (hint: try checking what happens with $T_1$).</p> Answer 56: <p>Because I'm lazy, I'll write $G = \operatorname{GL}(2,\mathbb{Z})$.</p> <p>From your comments/messages in the chat, I gather that you found the center $Z(G) = \{I,-I\}$ where $I$ is the identity matrix.</p> <p>Now, consider the upper central series: $$1 = Z_0 \trianglelefteq Z_1 \trianglelefteq Z_2 \trianglelefteq \cdots \trianglelefteq Z_i \trianglelefteq \cdots$$ where $$Z_{i+1} = \{ x \in G \mid \forall y \in G: [x,y] \in Z_i\}= \{ x \in G \mid \forall y \in G: x^{-1}y^{-1}xy \in Z_i\}.$$ In particular, observe that $$Z_1 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_0\} = \{ x \in G \mid \forall y \in G: xy = yx \} = Z(G) = \{I,-I\}.$$ Now, onwards to $Z_2$. $$Z_2 = \{ x \in G \mid \forall y \in G: [x,y] \in Z_1\} = \{ x \in G \mid \forall y \in G: (xy = yx \text{ or } xy = -yx) \}.$$ Let us examine the second condition: $xy = -yx$. This means that $y = -x^{-1}yx$, which gives $$\operatorname{trace}(y) = \operatorname{trace}(-x^{-1}yx) = -\operatorname{trace}(x^{-1}yx) = -\operatorname{trace}(xx^{-1}y) = -\operatorname{trace}(y)$$ hence $\operatorname{trace}(y) = 0$. So $x$ must commute with all matrices $y$ having $\operatorname{trace}(y) \neq 0$. We claim that the only matrices $x$ for which this holds, are $I$ and $-I$ (see below for a proof of this claim).</p> <p>Thus we end up with $Z_2(G) = Z(G)$ again! So if we repeat these steps again, we'll find that $Z_3 = Z(G)$, and again that $Z_4 = Z(G)$, etc: $Z_i = Z(G) = \{I,-I\}$ for all $i \geq 1$. </p> <p>But then the upper central series never terminates; and hence $G$ is not nilpotent!</p> <p><strong>Proof of claim:</strong> Clearly it holds for $x = I$ or $x = -I$, since these groups are in the center of $G$. </p> <p>Now, let $x = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}$. Then we must have that $$\begin{pmatrix}x_1 &amp; x_1+x_2\\x_3 &amp; x_3 + x_4\end{pmatrix} = \begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix}\begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix} = \begin{pmatrix}1 &amp; 1\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{pmatrix} = \begin{pmatrix}x_1 +x_3&amp; x_2+x_4\\x_3 &amp; x_4\end{pmatrix},$$ hence $x_3 = 0$ and $x_1 = x_4$. Similarly, commuting $x$ with $\begin{pmatrix}1 &amp; 0\\ 1 &amp; 1\end{pmatrix}$ you find that $x_2 = 0$ and $x_1 = x_4$. But since $x \in G$, then the only matrices satisfying this are $I$ and $-I$.</p>
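A quick check of the hinted exercise: taking $X = \begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1\end{pmatrix}$, a direct computation gives $$ T_1XT_{-1} = \begin{pmatrix} 1 &amp; 1\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix} -1 &amp; 0\\ 0 &amp; 1\end{pmatrix}\begin{pmatrix} 1 &amp; -1\\ 0 &amp; 1\end{pmatrix} = \begin{pmatrix} -1 &amp; 2\\ 0 &amp; 1\end{pmatrix}, $$ which is neither $X$ nor $-X$, so this $X$ is not in $Z_2$ and indeed $Z_2=Z_1=\{I,-I\}$.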
Find independent $X$ and $Y$ that are not normally distributed s.t. $X+Y$ is normally distributed.
2,090,019
7
https://math.stackexchange.com/questions/2090019/find-independent-x-and-y-that-are-not-normally-distributed-s-t-xy-is-nor
<p>It's a rather elementary fact that a sum of two independent normally distributed random variables $X$ and $Y$ is normally distributed (or, if you will, the convolution of two normal densities is a normal density). To what extent does this go the other way around? <a href="https://math.stackexchange.com/questions/2005725/y-is-normally-distributed-or-constant-if-x-and-xy-are-normally-distribute">It seems</a> that if $X$ and $Y$ are independent and both $X$ and $X+Y$ are normally distributed, then $Y$ is normally distributed or constant.</p> <p>If you drop both independence of $X$ and $Y$ and $X$ being normally distributed, it's pretty easy to e.g. take $X \sim \mathcal{N}(0,1)$, $Y = 1(X \geq 0)$ and consider $U=XY$, $V=X \,1(Y = 0)$. Now neither $U$ nor $V$ is normally distributed (each has an atom at $0$ of mass $1/2$), but $U+V = X$.</p> <p>However, if you require $X$ and $Y$ to be independent, but drop the requirement of $X$ being normally distributed, it seems more difficult. Is there a counterexample in that case, where $X$ and $Y$ are not normally distributed but $X+Y$ is?</p>
[ { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. 
Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. 
Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" }, { "answer_id": 2150683, "score": 7, "text": "<p>According to a theorem conjectured by P. Lévy and \nproved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p>\n\n<blockquote>\n <p>If $X$ and $Y$ are independent random variables and\n $X+Y$ is normally distributed, then both $X$ and $Y$ \n are normally distributed.</p>\n</blockquote>\n\n<p>I assume that $Y$ being a constant (and hence independent of $X$)\nis considered as being accounted for in this theorem by\nconsidering $Y$ to be a degenerate normal random variable with\nvariance $0$.</p>\n" } ]
Question: Find independent $X$ and $Y$ that are not normally distributed s.t. $X+Y$ is normally distributed. Answer 1: <p>According to a theorem conjectured by P. Lévy and proved by H. Cramér (see Feller, Chapter XV.8, Theorem 1), </p> <blockquote> <p>If $X$ and $Y$ are independent random variables and $X+Y$ is normally distributed, then both $X$ and $Y$ are normally distributed.</p> </blockquote> <p>I assume that $Y$ being a constant (and hence independent of $X$) is considered as being accounted for in this theorem by considering $Y$ to be a degenerate normal random variable with variance $0$.</p>
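In particular, the question has a negative answer: if $X$ and $Y$ are independent and $X+Y$ is normally distributed, the theorem forces $X$ and $Y$ to be normal (possibly degenerate), so no such counterexample exists.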
Strategy on factoring quartic polynomials: cross method.
2,090,587
7
https://math.stackexchange.com/questions/2090587/strategy-on-factoring-quartic-polynomials-cross-method
<p>I found a method to factorize <strong>quartic</strong> polynomials, but I don't understand how it works. </p> <p>It is presented like this:</p> <blockquote> <p><strong>Cross method</strong> This method factorizes ordered and complete fourth-degree polynomials of the form:</p> <p>$$F(x) = ax^4+bx^3+cx^2+dx+e$$</p> <p>Rules:</p> <ol> <li>Factor the extreme terms with the cross method to get a squared term (generally different from the squared term of the original polynomial). </li> <li>Get $\Delta$ from the difference of the squared term of the polynomial and the squared term from the first step, and replace the result in the original polynomial. </li> <li>Then, verify the binary combinations as a double cross factoring.</li> </ol> </blockquote> <p>I used it for different exercises and it works, but I don't understand the basis for this method. </p> <p>Can anyone explain the reasons?</p> <h2>P.S.</h2> <h2>Example</h2> <p>$$x^4+2x^3+3x^2+2x-3$$</p> <ol> <li>Factoring the extreme terms, $x^4$ and $-3$:</li> </ol> <p>$$(x^2+3)(x^2-1)$$ The result of the cross method is $3x^2-x^2=2x^2$</p> <ol start="2"> <li>Calculating $\Delta$:</li> </ol> <p>$$\Delta=\text{(original squared term)}-\text{(step one squared term)}= (3x^2)-(2x^2)=x^2$$</p> <p>Replacing it in the original polynomial:</p> <p>$$x^4+2x^3+x^2+2x-3$$</p> <ol start="3"> <li>Using the cross method for the second and fourth terms:</li> </ol> <p>For the second term: $(x^2+x)(x^2+x)$ gives $2x^3$</p> <p>For the fourth term: $(x+3)(x-1)$ gives $2x$</p> <p>So, arranging the terms, the two factors are:</p> <p>$$(x^2+x+3)(x^2+x-1)$$</p>
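As a sanity check of the example, the claimed factorization can be verified directly by substituting $u = x^2+x$: $$ (x^2+x+3)(x^2+x-1) = (u+3)(u-1) = u^2+2u-3 = x^4+2x^3+3x^2+2x-3, $$ which matches the original polynomial.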
[ { "answer_id": 2093948, "score": 2, "text": "<p>Are you aware of how to factor quadratic equations? This is just a method for that, adapted to work for quartic equations. The reason that this works is that many (but not all) quartic equations factor into the product of two quadratic equations.</p>\n\n<p>To see that this doesn't always work, apply it to $$p(x)=x^4-3x^3+x^2-2x-3=(x-3)(x^3+x+1)$$ The second term doesn't factor any further.</p>\n\n<p>Here's a less confusing way of giving the same instructions:</p>\n\n<ol>\n<li>Assume the polynomial, $f(x)$, factors into $(x^2+ax+b)(x^2+cx+d)$</li>\n<li>$bd$ must give the constant term of the original polynomial, so find two values that multiply to give the constant term and write $g(x)=(x^2+b)(x^2+d)=x^4+(b+d)x^2+bd$. In the computational example, $g(x)=(x^2+3)(x^2-1)$</li>\n<li>Now let's look at $f(x)-g(x)$, which is $2x^3+x^2+2x$ in your example. This is a degree 3 polynomial with no constant term and represents the parts of the factors we need to still find. In our factorization, we know that the degree $1$ term is given by $(ad+bc)x$, and we have already found $b$ and $d$. Similarly, we know that the degree $3$ term is given by $(a+c)x^3$. This is a system of equations with two variables and two unknowns and so can be solved. In your example, these equations are $-a+3c=2$ and $a+c=2$ respectively.</li>\n<li>Solving gives the last two coefficients we need, and then putting them together with the original piece gives us the factorization. In your example, the solution $a=c=1$ works. This gives us $(x^2+x+3)(x^2+x-1)$ </li>\n</ol>\n" }, { "answer_id": 2094026, "score": 2, "text": "<p>I might be misunderstanding, but this just looks like concealed expanding $$(x^2+ax+b)(x^2+cx+d)\\tag{1}$$ and comparing coefficients.</p>\n\n<ol>\n<li><em>Factoring extremes</em></li>\n</ol>\n\n<p>Well, what we want is to choose $b$ and $d$ such that $(x^2+b)(x^2+d)$ matches leading and constant factor. What it amounts to is finding $b$ and $d$ such that $bd = -3$. Let's say we were lucky and chose the winning pair $b = -1,\\ d=3.$</p>\n\n<ol start=\"2\">\n<li><em>Calculating $\\Delta$</em></li>\n</ol>\n\n<p>Why is this important? Well, $(x^2-1)(x^2+3) = x^4+2x^2-3$, so we are missing one $x^2$: $\\Delta = 3x^2 - 2x^2$. </p>\n\n<ol start=\"3\">\n<li><em>Covering for the missing $\\Delta$</em></li>\n</ol>\n\n<p>There is only one place where $\\Delta$ can come from, that is from $a$ and $c$ - we must have $ac x^2 = \\Delta$. Why? Well, if you expand $(1)$, you will find that $x^2$ term must be equal to $b + d + ac$. We found $b+d$ in step 1 and 2, so $acx^2$ is equal to $\\Delta$. In this case $ac = 1$. There are two possibilities in $\\Bbb Z$, either $a=c=1$, or $a=c=-1$. It is easy to decide which one, since it must fit the other terms as well, namely $x^3$ term is equal to $a + c$ and the $x$ term is equal to $ad + bc = 3a - c$. Thus, $a = c = 1$.</p>\n\n<ol start=\"4\">\n<li><em>Collecting terms</em></li>\n</ol>\n\n<p>We simply write $(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3$ and feel the magic all around us.</p>\n\n<hr>\n\n<p>Now, let us do the same thing, only transparently:</p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d) = x^4+(a+c)x^3+(b+d+ac)x^2+(ad+bc)x+bd$$</p>\n\n<p>so we need to solve the following system in $\\Bbb Z$:</p>\n\n<p>\\begin{array}{c r}\\begin{align}\na+c&amp;=2\\\\\nb+d+ac&amp;=3\\\\\nad+bc&amp;=2\\\\\nbd &amp;= -3\n\\end{align} &amp;\\tag{2}\\end{array}</p>\n\n<p>There are two possibilities, either $b = 1$ and $d = -3$, or $b = -1$ and $d = 3$. 
We can see that former leads to no solution, so we will pursuit the latter, the system becomes:</p>\n\n<p>\\begin{align}\na+c&amp;=2\\\\\nac&amp;=1\\\\\n3a-c&amp;=2\n\\end{align}</p>\n\n<p>The first and the third equation form linear system, which has unique solution $a = c = 1$, luckily, consistent with the second equation. Thus, we now know for certain that $$(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3.$$</p>\n\n<hr>\n\n<p>Addendum:</p>\n\n<p>There is a legitimate question why we would consider $(1)$ in the first case. Now, we are trying to factor a quartic over $\\Bbb Z$. Let's say $p(x) = x^4 +2x^3+3x^2+2x-3$ and assume that $p = fg$. We have that $\\deg p = \\deg f + \\deg g$, so there are two cases: $4=1+3$ or $4=2+2$. The first case would imply that $p$ has integer root and we can use <a href=\"https://en.wikipedia.org/wiki/Rational_root_theorem\" rel=\"nofollow noreferrer\">rational root theorem</a> to rule that out. Thus, only case $4=2+2$ remains, which leads to considering $(1)$.</p>\n\n<p>Finally, if the system $(2)$ had no solutions in $\\Bbb Z$, we would conclude that $p$ is irreducible over $\\Bbb Z$.</p>\n" }, { "answer_id": 2093948, "score": 2, "text": "<p>Are you aware of how to factor quadratic equations? This is just a method for that, adapted to work for quartic equations. The reason that this works is that many (but not all) quartic equations factor into the product of two quadratic equations.</p>\n\n<p>To see that this doesn't always work, apply it to $$p(x)=x^4-3x^3+x^2-2x-3=(x-3)(x^3+x+1)$$ The second term doesn't factor any further.</p>\n\n<p>Here's a less confusing way of giving the same instructions:</p>\n\n<ol>\n<li>Assume the polynomial, $f(x)$, factors into $(x^2+ax+b)(x^2+cx+d)$</li>\n<li>$bd$ must give the constant term of the original polynomial, so find two values that multiply to give the constant term and write $g(x)=(x^2+b)(x^2+d)=x^4+(b+d)x^2+bd$. In the computational example, $g(x)=(x^2+3)(x^2-1)$</li>\n<li>Now let's look at $f(x)-g(x)$, which is $2x^3+x^2+2x$ in your example. This is a degree 3 polynomial with no constant term and represents the parts of the factors we need to still find. In our factorization, we know that the degree $1$ term is given by $(ad+bc)x$, and we have already found $b$ and $d$. Similarly, we know that the degree $3$ term is given by $(a+c)x^3$. This is a system of equations with two variables and two unknowns and so can be solved. In your example, these equations are $-a+3c=2$ and $a+c=2$ respectively.</li>\n<li>Solving gives the last two coefficients we need, and then putting them together with the original piece gives us the factorization. In your example, the solution $a=c=1$ works. This gives us $(x^2+x+3)(x^2+x-1)$ </li>\n</ol>\n" }, { "answer_id": 2094026, "score": 2, "text": "<p>I might be misunderstanding, but this just looks like concealed expanding $$(x^2+ax+b)(x^2+cx+d)\\tag{1}$$ and comparing coefficients.</p>\n\n<ol>\n<li><em>Factoring extremes</em></li>\n</ol>\n\n<p>Well, what we want is to choose $b$ and $d$ such that $(x^2+b)(x^2+d)$ matches leading and constant factor. What it amounts to is finding $b$ and $d$ such that $bd = -3$. Let's say we were lucky and chose the winning pair $b = -1,\\ d=3.$</p>\n\n<ol start=\"2\">\n<li><em>Calculating $\\Delta$</em></li>\n</ol>\n\n<p>Why is this important? Well, $(x^2-1)(x^2+3) = x^4+2x^2-3$, so we are missing one $x^2$: $\\Delta = 3x^2 - 2x^2$. 
</p>\n\n<ol start=\"3\">\n<li><em>Covering for the missing $\\Delta$</em></li>\n</ol>\n\n<p>There is only one place where $\\Delta$ can come from, that is from $a$ and $c$ - we must have $ac x^2 = \\Delta$. Why? Well, if you expand $(1)$, you will find that $x^2$ term must be equal to $b + d + ac$. We found $b+d$ in step 1 and 2, so $acx^2$ is equal to $\\Delta$. In this case $ac = 1$. There are two possibilities in $\\Bbb Z$, either $a=c=1$, or $a=c=-1$. It is easy to decide which one, since it must fit the other terms as well, namely $x^3$ term is equal to $a + c$ and the $x$ term is equal to $ad + bc = 3a - c$. Thus, $a = c = 1$.</p>\n\n<ol start=\"4\">\n<li><em>Collecting terms</em></li>\n</ol>\n\n<p>We simply write $(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3$ and feel the magic all around us.</p>\n\n<hr>\n\n<p>Now, let us do the same thing, only transparently:</p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d) = x^4+(a+c)x^3+(b+d+ac)x^2+(ad+bc)x+bd$$</p>\n\n<p>so we need to solve the following system in $\\Bbb Z$:</p>\n\n<p>\\begin{array}{c r}\\begin{align}\na+c&amp;=2\\\\\nb+d+ac&amp;=3\\\\\nad+bc&amp;=2\\\\\nbd &amp;= -3\n\\end{align} &amp;\\tag{2}\\end{array}</p>\n\n<p>There are two possibilities, either $b = 1$ and $d = -3$, or $b = -1$ and $d = 3$. We can see that former leads to no solution, so we will pursuit the latter, the system becomes:</p>\n\n<p>\\begin{align}\na+c&amp;=2\\\\\nac&amp;=1\\\\\n3a-c&amp;=2\n\\end{align}</p>\n\n<p>The first and the third equation form linear system, which has unique solution $a = c = 1$, luckily, consistent with the second equation. Thus, we now know for certain that $$(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3.$$</p>\n\n<hr>\n\n<p>Addendum:</p>\n\n<p>There is a legitimate question why we would consider $(1)$ in the first case. Now, we are trying to factor a quartic over $\\Bbb Z$. Let's say $p(x) = x^4 +2x^3+3x^2+2x-3$ and assume that $p = fg$. We have that $\\deg p = \\deg f + \\deg g$, so there are two cases: $4=1+3$ or $4=2+2$. The first case would imply that $p$ has integer root and we can use <a href=\"https://en.wikipedia.org/wiki/Rational_root_theorem\" rel=\"nofollow noreferrer\">rational root theorem</a> to rule that out. Thus, only case $4=2+2$ remains, which leads to considering $(1)$.</p>\n\n<p>Finally, if the system $(2)$ had no solutions in $\\Bbb Z$, we would conclude that $p$ is irreducible over $\\Bbb Z$.</p>\n" }, { "answer_id": 2093948, "score": 2, "text": "<p>Are you aware of how to factor quadratic equations? This is just a method for that, adapted to work for quartic equations. The reason that this works is that many (but not all) quartic equations factor into the product of two quadratic equations.</p>\n\n<p>To see that this doesn't always work, apply it to $$p(x)=x^4-3x^3+x^2-2x-3=(x-3)(x^3+x+1)$$ The second term doesn't factor any further.</p>\n\n<p>Here's a less confusing way of giving the same instructions:</p>\n\n<ol>\n<li>Assume the polynomial, $f(x)$, factors into $(x^2+ax+b)(x^2+cx+d)$</li>\n<li>$bd$ must give the constant term of the original polynomial, so find two values that multiply to give the constant term and write $g(x)=(x^2+b)(x^2+d)=x^4+(b+d)x^2+bd$. In the computational example, $g(x)=(x^2+3)(x^2-1)$</li>\n<li>Now let's look at $f(x)-g(x)$, which is $2x^3+x^2+2x$ in your example. This is a degree 3 polynomial with no constant term and represents the parts of the factors we need to still find. In our factorization, we know that the degree $1$ term is given by $(ad+bc)x$, and we have already found $b$ and $d$. 
Similarly, we know that the degree $3$ term is given by $(a+c)x^3$. This is a system of equations with two variables and two unknowns and so can be solved. In your example, these equations are $-a+3c=2$ and $a+c=2$ respectively.</li>\n<li>Solving gives the last two coefficients we need, and then putting them together with the original piece gives us the factorization. In your example, the solution $a=c=1$ works. This gives us $(x^2+x+3)(x^2+x-1)$ </li>\n</ol>\n" }, { "answer_id": 2094026, "score": 2, "text": "<p>I might be misunderstanding, but this just looks like concealed expanding $$(x^2+ax+b)(x^2+cx+d)\\tag{1}$$ and comparing coefficients.</p>\n\n<ol>\n<li><em>Factoring extremes</em></li>\n</ol>\n\n<p>Well, what we want is to choose $b$ and $d$ such that $(x^2+b)(x^2+d)$ matches leading and constant factor. What it amounts to is finding $b$ and $d$ such that $bd = -3$. Let's say we were lucky and chose the winning pair $b = -1,\\ d=3.$</p>\n\n<ol start=\"2\">\n<li><em>Calculating $\\Delta$</em></li>\n</ol>\n\n<p>Why is this important? Well, $(x^2-1)(x^2+3) = x^4+2x^2-3$, so we are missing one $x^2$: $\\Delta = 3x^2 - 2x^2$. </p>\n\n<ol start=\"3\">\n<li><em>Covering for the missing $\\Delta$</em></li>\n</ol>\n\n<p>There is only one place where $\\Delta$ can come from, that is from $a$ and $c$ - we must have $ac x^2 = \\Delta$. Why? Well, if you expand $(1)$, you will find that $x^2$ term must be equal to $b + d + ac$. We found $b+d$ in step 1 and 2, so $acx^2$ is equal to $\\Delta$. In this case $ac = 1$. There are two possibilities in $\\Bbb Z$, either $a=c=1$, or $a=c=-1$. It is easy to decide which one, since it must fit the other terms as well, namely $x^3$ term is equal to $a + c$ and the $x$ term is equal to $ad + bc = 3a - c$. Thus, $a = c = 1$.</p>\n\n<ol start=\"4\">\n<li><em>Collecting terms</em></li>\n</ol>\n\n<p>We simply write $(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3$ and feel the magic all around us.</p>\n\n<hr>\n\n<p>Now, let us do the same thing, only transparently:</p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d) = x^4+(a+c)x^3+(b+d+ac)x^2+(ad+bc)x+bd$$</p>\n\n<p>so we need to solve the following system in $\\Bbb Z$:</p>\n\n<p>\\begin{array}{c r}\\begin{align}\na+c&amp;=2\\\\\nb+d+ac&amp;=3\\\\\nad+bc&amp;=2\\\\\nbd &amp;= -3\n\\end{align} &amp;\\tag{2}\\end{array}</p>\n\n<p>There are two possibilities, either $b = 1$ and $d = -3$, or $b = -1$ and $d = 3$. We can see that former leads to no solution, so we will pursuit the latter, the system becomes:</p>\n\n<p>\\begin{align}\na+c&amp;=2\\\\\nac&amp;=1\\\\\n3a-c&amp;=2\n\\end{align}</p>\n\n<p>The first and the third equation form linear system, which has unique solution $a = c = 1$, luckily, consistent with the second equation. Thus, we now know for certain that $$(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3.$$</p>\n\n<hr>\n\n<p>Addendum:</p>\n\n<p>There is a legitimate question why we would consider $(1)$ in the first case. Now, we are trying to factor a quartic over $\\Bbb Z$. Let's say $p(x) = x^4 +2x^3+3x^2+2x-3$ and assume that $p = fg$. We have that $\\deg p = \\deg f + \\deg g$, so there are two cases: $4=1+3$ or $4=2+2$. The first case would imply that $p$ has integer root and we can use <a href=\"https://en.wikipedia.org/wiki/Rational_root_theorem\" rel=\"nofollow noreferrer\">rational root theorem</a> to rule that out. 
Thus, only case $4=2+2$ remains, which leads to considering $(1)$.</p>\n\n<p>Finally, if the system $(2)$ had no solutions in $\\Bbb Z$, we would conclude that $p$ is irreducible over $\\Bbb Z$.</p>\n" }, { "answer_id": 2093948, "score": 2, "text": "<p>Are you aware of how to factor quadratic equations? This is just a method for that, adapted to work for quartic equations. The reason that this works is that many (but not all) quartic equations factor into the product of two quadratic equations.</p>\n\n<p>To see that this doesn't always work, apply it to $$p(x)=x^4-3x^3+x^2-2x-3=(x-3)(x^3+x+1)$$ The second term doesn't factor any further.</p>\n\n<p>Here's a less confusing way of giving the same instructions:</p>\n\n<ol>\n<li>Assume the polynomial, $f(x)$, factors into $(x^2+ax+b)(x^2+cx+d)$</li>\n<li>$bd$ must give the constant term of the original polynomial, so find two values that multiply to give the constant term and write $g(x)=(x^2+b)(x^2+d)=x^4+(b+d)x^2+bd$. In the computational example, $g(x)=(x^2+3)(x^2-1)$</li>\n<li>Now let's look at $f(x)-g(x)$, which is $2x^3+x^2+2x$ in your example. This is a degree 3 polynomial with no constant term and represents the parts of the factors we need to still find. In our factorization, we know that the degree $1$ term is given by $(ad+bc)x$, and we have already found $b$ and $d$. Similarly, we know that the degree $3$ term is given by $(a+c)x^3$. This is a system of equations with two variables and two unknowns and so can be solved. In your example, these equations are $-a+3c=2$ and $a+c=2$ respectively.</li>\n<li>Solving gives the last two coefficients we need, and then putting them together with the original piece gives us the factorization. In your example, the solution $a=c=1$ works. This gives us $(x^2+x+3)(x^2+x-1)$ </li>\n</ol>\n" }, { "answer_id": 2094026, "score": 2, "text": "<p>I might be misunderstanding, but this just looks like concealed expanding $$(x^2+ax+b)(x^2+cx+d)\\tag{1}$$ and comparing coefficients.</p>\n\n<ol>\n<li><em>Factoring extremes</em></li>\n</ol>\n\n<p>Well, what we want is to choose $b$ and $d$ such that $(x^2+b)(x^2+d)$ matches leading and constant factor. What it amounts to is finding $b$ and $d$ such that $bd = -3$. Let's say we were lucky and chose the winning pair $b = -1,\\ d=3.$</p>\n\n<ol start=\"2\">\n<li><em>Calculating $\\Delta$</em></li>\n</ol>\n\n<p>Why is this important? Well, $(x^2-1)(x^2+3) = x^4+2x^2-3$, so we are missing one $x^2$: $\\Delta = 3x^2 - 2x^2$. </p>\n\n<ol start=\"3\">\n<li><em>Covering for the missing $\\Delta$</em></li>\n</ol>\n\n<p>There is only one place where $\\Delta$ can come from, that is from $a$ and $c$ - we must have $ac x^2 = \\Delta$. Why? Well, if you expand $(1)$, you will find that $x^2$ term must be equal to $b + d + ac$. We found $b+d$ in step 1 and 2, so $acx^2$ is equal to $\\Delta$. In this case $ac = 1$. There are two possibilities in $\\Bbb Z$, either $a=c=1$, or $a=c=-1$. It is easy to decide which one, since it must fit the other terms as well, namely $x^3$ term is equal to $a + c$ and the $x$ term is equal to $ad + bc = 3a - c$. 
Thus, $a = c = 1$.</p>\n\n<ol start=\"4\">\n<li><em>Collecting terms</em></li>\n</ol>\n\n<p>We simply write $(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3$ and feel the magic all around us.</p>\n\n<hr>\n\n<p>Now, let us do the same thing, only transparently:</p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d) = x^4+(a+c)x^3+(b+d+ac)x^2+(ad+bc)x+bd$$</p>\n\n<p>so we need to solve the following system in $\\Bbb Z$:</p>\n\n<p>\\begin{array}{c r}\\begin{align}\na+c&amp;=2\\\\\nb+d+ac&amp;=3\\\\\nad+bc&amp;=2\\\\\nbd &amp;= -3\n\\end{align} &amp;\\tag{2}\\end{array}</p>\n\n<p>There are two possibilities, either $b = 1$ and $d = -3$, or $b = -1$ and $d = 3$. We can see that former leads to no solution, so we will pursuit the latter, the system becomes:</p>\n\n<p>\\begin{align}\na+c&amp;=2\\\\\nac&amp;=1\\\\\n3a-c&amp;=2\n\\end{align}</p>\n\n<p>The first and the third equation form linear system, which has unique solution $a = c = 1$, luckily, consistent with the second equation. Thus, we now know for certain that $$(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3.$$</p>\n\n<hr>\n\n<p>Addendum:</p>\n\n<p>There is a legitimate question why we would consider $(1)$ in the first case. Now, we are trying to factor a quartic over $\\Bbb Z$. Let's say $p(x) = x^4 +2x^3+3x^2+2x-3$ and assume that $p = fg$. We have that $\\deg p = \\deg f + \\deg g$, so there are two cases: $4=1+3$ or $4=2+2$. The first case would imply that $p$ has integer root and we can use <a href=\"https://en.wikipedia.org/wiki/Rational_root_theorem\" rel=\"nofollow noreferrer\">rational root theorem</a> to rule that out. Thus, only case $4=2+2$ remains, which leads to considering $(1)$.</p>\n\n<p>Finally, if the system $(2)$ had no solutions in $\\Bbb Z$, we would conclude that $p$ is irreducible over $\\Bbb Z$.</p>\n" }, { "answer_id": 2093948, "score": 2, "text": "<p>Are you aware of how to factor quadratic equations? This is just a method for that, adapted to work for quartic equations. The reason that this works is that many (but not all) quartic equations factor into the product of two quadratic equations.</p>\n\n<p>To see that this doesn't always work, apply it to $$p(x)=x^4-3x^3+x^2-2x-3=(x-3)(x^3+x+1)$$ The second term doesn't factor any further.</p>\n\n<p>Here's a less confusing way of giving the same instructions:</p>\n\n<ol>\n<li>Assume the polynomial, $f(x)$, factors into $(x^2+ax+b)(x^2+cx+d)$</li>\n<li>$bd$ must give the constant term of the original polynomial, so find two values that multiply to give the constant term and write $g(x)=(x^2+b)(x^2+d)=x^4+(b+d)x^2+bd$. In the computational example, $g(x)=(x^2+3)(x^2-1)$</li>\n<li>Now let's look at $f(x)-g(x)$, which is $2x^3+x^2+2x$ in your example. This is a degree 3 polynomial with no constant term and represents the parts of the factors we need to still find. In our factorization, we know that the degree $1$ term is given by $(ad+bc)x$, and we have already found $b$ and $d$. Similarly, we know that the degree $3$ term is given by $(a+c)x^3$. This is a system of equations with two variables and two unknowns and so can be solved. In your example, these equations are $-a+3c=2$ and $a+c=2$ respectively.</li>\n<li>Solving gives the last two coefficients we need, and then putting them together with the original piece gives us the factorization. In your example, the solution $a=c=1$ works. 
This gives us $(x^2+x+3)(x^2+x-1)$ </li>\n</ol>\n" }, { "answer_id": 2094026, "score": 2, "text": "<p>I might be misunderstanding, but this just looks like concealed expanding $$(x^2+ax+b)(x^2+cx+d)\\tag{1}$$ and comparing coefficients.</p>\n\n<ol>\n<li><em>Factoring extremes</em></li>\n</ol>\n\n<p>Well, what we want is to choose $b$ and $d$ such that $(x^2+b)(x^2+d)$ matches leading and constant factor. What it amounts to is finding $b$ and $d$ such that $bd = -3$. Let's say we were lucky and chose the winning pair $b = -1,\\ d=3.$</p>\n\n<ol start=\"2\">\n<li><em>Calculating $\\Delta$</em></li>\n</ol>\n\n<p>Why is this important? Well, $(x^2-1)(x^2+3) = x^4+2x^2-3$, so we are missing one $x^2$: $\\Delta = 3x^2 - 2x^2$. </p>\n\n<ol start=\"3\">\n<li><em>Covering for the missing $\\Delta$</em></li>\n</ol>\n\n<p>There is only one place where $\\Delta$ can come from, that is from $a$ and $c$ - we must have $ac x^2 = \\Delta$. Why? Well, if you expand $(1)$, you will find that $x^2$ term must be equal to $b + d + ac$. We found $b+d$ in step 1 and 2, so $acx^2$ is equal to $\\Delta$. In this case $ac = 1$. There are two possibilities in $\\Bbb Z$, either $a=c=1$, or $a=c=-1$. It is easy to decide which one, since it must fit the other terms as well, namely $x^3$ term is equal to $a + c$ and the $x$ term is equal to $ad + bc = 3a - c$. Thus, $a = c = 1$.</p>\n\n<ol start=\"4\">\n<li><em>Collecting terms</em></li>\n</ol>\n\n<p>We simply write $(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3$ and feel the magic all around us.</p>\n\n<hr>\n\n<p>Now, let us do the same thing, only transparently:</p>\n\n<p>$$(x^2+ax+b)(x^2+cx+d) = x^4+(a+c)x^3+(b+d+ac)x^2+(ad+bc)x+bd$$</p>\n\n<p>so we need to solve the following system in $\\Bbb Z$:</p>\n\n<p>\\begin{array}{c r}\\begin{align}\na+c&amp;=2\\\\\nb+d+ac&amp;=3\\\\\nad+bc&amp;=2\\\\\nbd &amp;= -3\n\\end{align} &amp;\\tag{2}\\end{array}</p>\n\n<p>There are two possibilities, either $b = 1$ and $d = -3$, or $b = -1$ and $d = 3$. We can see that former leads to no solution, so we will pursuit the latter, the system becomes:</p>\n\n<p>\\begin{align}\na+c&amp;=2\\\\\nac&amp;=1\\\\\n3a-c&amp;=2\n\\end{align}</p>\n\n<p>The first and the third equation form linear system, which has unique solution $a = c = 1$, luckily, consistent with the second equation. Thus, we now know for certain that $$(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3.$$</p>\n\n<hr>\n\n<p>Addendum:</p>\n\n<p>There is a legitimate question why we would consider $(1)$ in the first case. Now, we are trying to factor a quartic over $\\Bbb Z$. Let's say $p(x) = x^4 +2x^3+3x^2+2x-3$ and assume that $p = fg$. We have that $\\deg p = \\deg f + \\deg g$, so there are two cases: $4=1+3$ or $4=2+2$. The first case would imply that $p$ has integer root and we can use <a href=\"https://en.wikipedia.org/wiki/Rational_root_theorem\" rel=\"nofollow noreferrer\">rational root theorem</a> to rule that out. Thus, only case $4=2+2$ remains, which leads to considering $(1)$.</p>\n\n<p>Finally, if the system $(2)$ had no solutions in $\\Bbb Z$, we would conclude that $p$ is irreducible over $\\Bbb Z$.</p>\n" }, { "answer_id": 2093948, "score": 2, "text": "<p>Are you aware of how to factor quadratic equations? This is just a method for that, adapted to work for quartic equations. 
Question: Strategy on factoring quartic polynomials: cross method. Answer 1: <p>Are you aware of how to factor quadratic equations? This is just a method for that, adapted to work for quartic equations. The reason that this works is that many (but not all) quartic equations factor into the product of two quadratic equations.</p>

<p>To see that this doesn't always work, apply it to $$p(x)=x^4-3x^3+x^2-2x-3=(x-3)(x^3+x+1)$$ The second factor doesn't factor any further.</p>

<p>Here's a less confusing way of giving the same instructions:</p>

<ol>
<li>Assume the polynomial, $f(x)$, factors into $(x^2+ax+b)(x^2+cx+d)$.</li>
<li>$bd$ must give the constant term of the original polynomial, so find two values that multiply to give the constant term and write $g(x)=(x^2+b)(x^2+d)=x^4+(b+d)x^2+bd$. In the computational example, $g(x)=(x^2+3)(x^2-1)$.</li>
<li>Now let's look at $f(x)-g(x)$, which is $2x^3+x^2+2x$ in your example. This is a degree $3$ polynomial with no constant term, and it represents the parts of the factors we still need to find. In our factorization, we know that the degree $1$ term is given by $(ad+bc)x$, and we have already found $b$ and $d$. Similarly, we know that the degree $3$ term is given by $(a+c)x^3$. This is a system of two equations in two unknowns, and so it can be solved. In your example, these equations are $-a+3c=2$ and $a+c=2$ respectively.</li>
<li>Solving gives the last two coefficients we need, and then putting them together with the original pieces gives us the factorization. In your example, the solution $a=c=1$ works. This gives us $(x^2+x+3)(x^2+x-1)$.</li>
</ol> Answer 2: <p>I might be misunderstanding, but this just looks like concealed expanding $$(x^2+ax+b)(x^2+cx+d)\tag{1}$$ and comparing coefficients.</p>

<ol>
<li><em>Factoring extremes</em></li>
</ol>

<p>Well, what we want is to choose $b$ and $d$ such that $(x^2+b)(x^2+d)$ matches the leading and constant terms. What it amounts to is finding $b$ and $d$ such that $bd = -3$. Let's say we were lucky and chose the winning pair $b = -1,\ d=3.$</p>

<ol start="2">
<li><em>Calculating $\Delta$</em></li>
</ol>

<p>Why is this important? Well, $(x^2-1)(x^2+3) = x^4+2x^2-3$, so we are missing one $x^2$: $\Delta = 3x^2 - 2x^2 = x^2$.</p>

<ol start="3">
<li><em>Covering for the missing $\Delta$</em></li>
</ol>

<p>There is only one place where $\Delta$ can come from, that is, from $a$ and $c$ - we must have $ac x^2 = \Delta$. Why? Well, if you expand $(1)$, you will find that the $x^2$ term must be equal to $b + d + ac$. We found $b+d$ in steps 1 and 2, so $acx^2$ is equal to $\Delta$. In this case $ac = 1$. There are two possibilities in $\Bbb Z$: either $a=c=1$, or $a=c=-1$. It is easy to decide which one, since it must fit the other terms as well, namely the $x^3$ term is equal to $a + c$ and the $x$ term is equal to $ad + bc = 3a - c$. Thus, $a = c = 1$.</p>

<ol start="4">
<li><em>Collecting terms</em></li>
</ol>

<p>We simply write $(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3$ and feel the magic all around us.</p>

<hr>

<p>Now, let us do the same thing, only transparently:</p>

<p>$$(x^2+ax+b)(x^2+cx+d) = x^4+(a+c)x^3+(b+d+ac)x^2+(ad+bc)x+bd$$</p>

<p>so we need to solve the following system in $\Bbb Z$:</p>

<p>\begin{array}{c r}\begin{align}
a+c&amp;=2\\
b+d+ac&amp;=3\\
ad+bc&amp;=2\\
bd &amp;= -3
\end{align} &amp;\tag{2}\end{array}</p>

<p>There are two possibilities: either $b = 1$ and $d = -3$, or $b = -1$ and $d = 3$.
We can see that the former leads to no solution, so we pursue the latter, and the system becomes:</p>

<p>\begin{align}
a+c&amp;=2\\
ac&amp;=1\\
3a-c&amp;=2
\end{align}</p>

<p>The first and the third equations form a linear system with the unique solution $a = c = 1$, which is, luckily, consistent with the second equation. Thus, we now know for certain that $$(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3.$$</p>

<hr>

<p>Addendum:</p>

<p>There is a legitimate question of why we would consider $(1)$ in the first place. Now, we are trying to factor a quartic over $\Bbb Z$. Let's say $p(x) = x^4 +2x^3+3x^2+2x-3$ and assume that $p = fg$. We have that $\deg p = \deg f + \deg g$, so there are two cases: $4=1+3$ or $4=2+2$. The first case would imply that $p$ has an integer root, and we can use the <a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow noreferrer">rational root theorem</a> to rule that out. Thus, only the case $4=2+2$ remains, which leads to considering $(1)$.</p>

<p>Finally, if the system $(2)$ had no solutions in $\Bbb Z$, we would conclude that $p$ is irreducible over $\Bbb Z$.</p>
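For completeness, system $(2)$ from the second answer can also be solved exhaustively: there are only finitely many integer pairs $(b, d)$ with $bd = -3$, and each choice leaves three equations in $a$ and $c$. The sketch below (again an editorial addition, not from the thread) runs that enumeration with SymPy.

```python
from sympy import symbols, solve

a, c = symbols('a c')

# All integer pairs (b, d) with b*d = -3, as in the addendum.
pairs = [(1, -3), (-1, 3), (3, -1), (-3, 1)]

for b, d in pairs:
    # Remaining equations of system (2): a+c=2, b+d+ac=3, ad+bc=2.
    eqs = [a + c - 2, b + d + a*c - 3, a*d + b*c - 2]
    for sol in solve(eqs, [a, c], dict=True):
        if sol[a].is_integer and sol[c].is_integer:
            print(f"b={b}, d={d}, a={sol[a]}, c={sol[c]}")

# Only b=-1, d=3 (and its mirror b=3, d=-1) survive, each giving a=c=1,
# i.e. the factorization (x**2 + x - 1)*(x**2 + x + 3).
```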
Well, if you expand $(1)$, you will find that $x^2$ term must be equal to $b + d + ac$. We found $b+d$ in step 1 and 2, so $acx^2$ is equal to $\Delta$. In this case $ac = 1$. There are two possibilities in $\Bbb Z$, either $a=c=1$, or $a=c=-1$. It is easy to decide which one, since it must fit the other terms as well, namely $x^3$ term is equal to $a + c$ and the $x$ term is equal to $ad + bc = 3a - c$. Thus, $a = c = 1$.</p> <ol start="4"> <li><em>Collecting terms</em></li> </ol> <p>We simply write $(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3$ and feel the magic all around us.</p> <hr> <p>Now, let us do the same thing, only transparently:</p> <p>$$(x^2+ax+b)(x^2+cx+d) = x^4+(a+c)x^3+(b+d+ac)x^2+(ad+bc)x+bd$$</p> <p>so we need to solve the following system in $\Bbb Z$:</p> <p>\begin{array}{c r}\begin{align} a+c&amp;=2\\ b+d+ac&amp;=3\\ ad+bc&amp;=2\\ bd &amp;= -3 \end{align} &amp;\tag{2}\end{array}</p> <p>There are two possibilities, either $b = 1$ and $d = -3$, or $b = -1$ and $d = 3$. We can see that former leads to no solution, so we will pursuit the latter, the system becomes:</p> <p>\begin{align} a+c&amp;=2\\ ac&amp;=1\\ 3a-c&amp;=2 \end{align}</p> <p>The first and the third equation form linear system, which has unique solution $a = c = 1$, luckily, consistent with the second equation. Thus, we now know for certain that $$(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3.$$</p> <hr> <p>Addendum:</p> <p>There is a legitimate question why we would consider $(1)$ in the first case. Now, we are trying to factor a quartic over $\Bbb Z$. Let's say $p(x) = x^4 +2x^3+3x^2+2x-3$ and assume that $p = fg$. We have that $\deg p = \deg f + \deg g$, so there are two cases: $4=1+3$ or $4=2+2$. The first case would imply that $p$ has integer root and we can use <a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow noreferrer">rational root theorem</a> to rule that out. Thus, only case $4=2+2$ remains, which leads to considering $(1)$.</p> <p>Finally, if the system $(2)$ had no solutions in $\Bbb Z$, we would conclude that $p$ is irreducible over $\Bbb Z$.</p> Answer 5: <p>Are you aware of how to factor quadratic equations? This is just a method for that, adapted to work for quartic equations. The reason that this works is that many (but not all) quartic equations factor into the product of two quadratic equations.</p> <p>To see that this doesn't always work, apply it to $$p(x)=x^4-3x^3+x^2-2x-3=(x-3)(x^3+x+1)$$ The second term doesn't factor any further.</p> <p>Here's a less confusing way of giving the same instructions:</p> <ol> <li>Assume the polynomial, $f(x)$, factors into $(x^2+ax+b)(x^2+cx+d)$</li> <li>$bd$ must give the constant term of the original polynomial, so find two values that multiply to give the constant term and write $g(x)=(x^2+b)(x^2+d)=x^4+(b+d)x^2+bd$. In the computational example, $g(x)=(x^2+3)(x^2-1)$</li> <li>Now let's look at $f(x)-g(x)$, which is $2x^3+x^2+2x$ in your example. This is a degree 3 polynomial with no constant term and represents the parts of the factors we need to still find. In our factorization, we know that the degree $1$ term is given by $(ad+bc)x$, and we have already found $b$ and $d$. Similarly, we know that the degree $3$ term is given by $(a+c)x^3$. This is a system of equations with two variables and two unknowns and so can be solved. 
In your example, these equations are $-a+3c=2$ and $a+c=2$ respectively.</li> <li>Solving gives the last two coefficients we need, and then putting them together with the original piece gives us the factorization. In your example, the solution $a=c=1$ works. This gives us $(x^2+x+3)(x^2+x-1)$ </li> </ol> Answer 6: <p>I might be misunderstanding, but this just looks like concealed expanding $$(x^2+ax+b)(x^2+cx+d)\tag{1}$$ and comparing coefficients.</p> <ol> <li><em>Factoring extremes</em></li> </ol> <p>Well, what we want is to choose $b$ and $d$ such that $(x^2+b)(x^2+d)$ matches leading and constant factor. What it amounts to is finding $b$ and $d$ such that $bd = -3$. Let's say we were lucky and chose the winning pair $b = -1,\ d=3.$</p> <ol start="2"> <li><em>Calculating $\Delta$</em></li> </ol> <p>Why is this important? Well, $(x^2-1)(x^2+3) = x^4+2x^2-3$, so we are missing one $x^2$: $\Delta = 3x^2 - 2x^2$. </p> <ol start="3"> <li><em>Covering for the missing $\Delta$</em></li> </ol> <p>There is only one place where $\Delta$ can come from, that is from $a$ and $c$ - we must have $ac x^2 = \Delta$. Why? Well, if you expand $(1)$, you will find that $x^2$ term must be equal to $b + d + ac$. We found $b+d$ in step 1 and 2, so $acx^2$ is equal to $\Delta$. In this case $ac = 1$. There are two possibilities in $\Bbb Z$, either $a=c=1$, or $a=c=-1$. It is easy to decide which one, since it must fit the other terms as well, namely $x^3$ term is equal to $a + c$ and the $x$ term is equal to $ad + bc = 3a - c$. Thus, $a = c = 1$.</p> <ol start="4"> <li><em>Collecting terms</em></li> </ol> <p>We simply write $(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3$ and feel the magic all around us.</p> <hr> <p>Now, let us do the same thing, only transparently:</p> <p>$$(x^2+ax+b)(x^2+cx+d) = x^4+(a+c)x^3+(b+d+ac)x^2+(ad+bc)x+bd$$</p> <p>so we need to solve the following system in $\Bbb Z$:</p> <p>\begin{array}{c r}\begin{align} a+c&amp;=2\\ b+d+ac&amp;=3\\ ad+bc&amp;=2\\ bd &amp;= -3 \end{align} &amp;\tag{2}\end{array}</p> <p>There are two possibilities, either $b = 1$ and $d = -3$, or $b = -1$ and $d = 3$. We can see that former leads to no solution, so we will pursuit the latter, the system becomes:</p> <p>\begin{align} a+c&amp;=2\\ ac&amp;=1\\ 3a-c&amp;=2 \end{align}</p> <p>The first and the third equation form linear system, which has unique solution $a = c = 1$, luckily, consistent with the second equation. Thus, we now know for certain that $$(x^2+x-1)(x^2+x+3) = x^4+2x^3+3x^2+2x-3.$$</p> <hr> <p>Addendum:</p> <p>There is a legitimate question why we would consider $(1)$ in the first case. Now, we are trying to factor a quartic over $\Bbb Z$. Let's say $p(x) = x^4 +2x^3+3x^2+2x-3$ and assume that $p = fg$. We have that $\deg p = \deg f + \deg g$, so there are two cases: $4=1+3$ or $4=2+2$. The first case would imply that $p$ has integer root and we can use <a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow noreferrer">rational root theorem</a> to rule that out. Thus, only case $4=2+2$ remains, which leads to considering $(1)$.</p> <p>Finally, if the system $(2)$ had no solutions in $\Bbb Z$, we would conclude that $p$ is irreducible over $\Bbb Z$.</p> Answer 7: <p>Are you aware of how to factor quadratic equations? This is just a method for that, adapted to work for quartic equations. 
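Steps 1-3 of the method are mechanical enough to automate. Below is a minimal sketch in Python with sympy; the tooling is my own choice for illustration, as neither answer uses code. It tries every integer pair with $bd=-3$ and solves the remaining coefficient equations for $a$ and $c$, reproducing system $(2)$.

```python
from sympy import symbols, expand, solve

x, a, c = symbols('x a c')
p = x**4 + 2*x**3 + 3*x**2 + 2*x - 3   # the quartic from the question

# Step 1: integer pairs (b, d) with b*d = -3, so the constant term matches.
for b, d in [(1, -3), (-1, 3), (3, -1), (-3, 1)]:
    candidate = expand((x**2 + a*x + b) * (x**2 + c*x + d))
    # Steps 2-3: the x^1, x^2 and x^3 coefficients must agree with p.
    eqs = [(candidate - p).coeff(x, k) for k in (1, 2, 3)]
    for sol in solve(eqs, [a, c], dict=True):
        if all(v.is_integer for v in sol.values()):
            print(b, d, sol)
# Prints hits for (b, d) = (-1, 3) and (3, -1), each with {a: 1, c: 1}:
# the same factorization (x^2 + x - 1)(x^2 + x + 3) with the factors swapped.
```

If the loop printed nothing, we would be in the addendum's situation and could conclude that $p$ is irreducible over $\Bbb Z$ (after ruling out linear factors with the rational root theorem).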
limit of $(e^x-1)^{\frac{1}{x}}$ - Without L'hopital
2,090,606
7
https://math.stackexchange.com/questions/2090606/limit-of-ex-1-frac1x-without-lhopital
<p>I am having trouble calculating the following limit:</p> <p>$$\lim_{x\to\infty}(e^x-1)^{\frac{1}{x}}$$</p> <p>I figured out (with help from Wolfram) that the limit is $e$, but I can't understand why.</p> <p>I tried to use $x^a = e^{a\ln(x)}$, but that didn't help because the base is $e^x-1$.</p> <p>Any hint or explanation is welcome! <br>PS: I can't use L'hopital (tagged so specifically)</p>
[ { "answer_id": 2090680, "score": 8, "text": "<p>Note that we can write</p>\n\n<p>$$\\left(e^x-1\\right)^{1/x}=e\\left(1-e^{-x}\\right)^{1/x}$$</p>\n\n<p>The limit, $\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}$, is not of indeterminate form since $1^0=1$.</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{x\\to \\infty}\\left(e^x-1\\right)^{1/x}&amp;=\\lim_{x\\to \\infty}\\left(e\\left(1-e^{-x}\\right)^{1/x}\\right)\\\\\\\\\n&amp;=e\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}\\\\\\\\\n&amp;=e\n\\end{align}$$ </p>\n" }, { "answer_id": 2090783, "score": 3, "text": "<p>Check that $(1/2)e^x &lt; e^x-1$ for $x&gt;1.$ Thus</p>\n\n<p>$$((1/2)e^x)^{1/x} &lt; (e^x-1)^{1/x}&lt; (e^x)^{1/x}\\,\\text { for } x&gt;1.$$</p>\n\n<p>The term on the left equals $(1/2)^{1/x}e,$ the term on the right equals $e.$ Since $(1/2)^{1/x} \\to 1,$ the desired limit is $e$ by the squeeze theorem.</p>\n" }, { "answer_id": 2090615, "score": 2, "text": "<p>If \n$$\nL=\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\Rightarrow \\log L=\\log\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\stackrel{\\text{continuity of the logarithm}}{=} \\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\stackrel{\\text{L'Hôpital's rule}}{=}\\lim_{x\\rightarrow \\infty}\\frac{e^x}{e^x}=1\\Rightarrow\\log L=1\\Rightarrow e=L\n$$\nTo remove L'Hôpital's rule, use:\n$$\n\\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\sim \\lim_{x\\rightarrow \\infty}1/x\\log(e^x)=\\lim_{x\\rightarrow \\infty}x/x=1\n$$</p>\n" }, { "answer_id": 2090612, "score": 1, "text": "<p>We have: $\\ln \\left(\\left(e^x-1\\right)^{1/x}\\right) = \\dfrac{\\ln(e^x-1)}{x}$. You can use L'Hôpital's rule here, and your answer would be $e^L$, where $L$ is the limit obtained from L'Hôpital.</p>\n" }, { "answer_id": 2090626, "score": 1, "text": "<p>$$\n\\lim _{x\\to \\infty }\\left(\\left(e^x-1\\right)^{\\frac{1}{x}}\\right)=\\lim_{x\\rightarrow \\infty}e^{\\frac{1}{x}\\ln(e^x-1)}\\approx \\lim_{x\\rightarrow \\infty}\\frac{1}{x}\\ln(e^x)=\\lim_{x\\rightarrow \\infty}\\frac{x}{x}=1 \\rightarrow \\color{red}{e^1}\n$$</p>\n" } ]
You can use L'hospital rule here..and your answer would be $e^L$ whereas $L$ is the limit of the L'hospital.</p>\n" }, { "answer_id": 2090626, "score": 1, "text": "<p>$$\n\\lim _{x\\to \\infty }\\left(\\left(e^x-1\\right)^{\\frac{1}{x}}\\right)=\\lim_{x\\rightarrow \\infty}e^{\\frac{1}{x}\\ln(e^x-1)}\\approx \\lim_{x\\rightarrow \\infty}\\frac{1}{x}\\ln(e^x)=\\lim_{x\\rightarrow \\infty}\\frac{x}{x}=1 \\rightarrow \\color{red}{e^1}\n$$</p>\n" }, { "answer_id": 2090680, "score": 8, "text": "<p>Note that we can write</p>\n\n<p>$$\\left(e^x-1\\right)^{1/x}=e\\left(1-e^{-x}\\right)^{1/x}$$</p>\n\n<p>The limit, $\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}$, is not of indeterminate form since $1^0=1$.</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{x\\to \\infty}\\left(e^x-1\\right)^{1/x}&amp;=\\lim_{x\\to \\infty}\\left(e\\left(1-e^{-x}\\right)^{1/x}\\right)\\\\\\\\\n&amp;=e\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}\\\\\\\\\n&amp;=e\n\\end{align}$$ </p>\n" }, { "answer_id": 2090783, "score": 3, "text": "<p>Check that $(1/2)e^x &lt; e^x-1$ for $x&gt;1.$ Thus</p>\n\n<p>$$((1/2)e^x)^{1/x} &lt; (e^x-1)^{1/x}&lt; (e^x)^{1/x}\\,\\text { for } x&gt;1.$$</p>\n\n<p>The term on the left equals $(1/2)^{1/x}e,$ the term on the right equals $e.$ Since $(1/2)^{1/x} \\to 1,$ the desired limit is $e$ by the squeeze theorem.</p>\n" }, { "answer_id": 2090615, "score": 2, "text": "<p>If \n$$\nL=\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\Rightarrow \\log L=\\log\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\stackrel{\\text{continuity of the logarithm}}{\\Rightarrow} \\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\stackrel{L'Hôpital's rule}=\\lim_{x\\rightarrow \\infty}\\frac{e^x}{e^x}=1\\Rightarrow\\log L=1\\Rightarrow e=L\n$$\nTo remove L'Hôpital's use:\n$$\n\\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\sim \\lim_{x\\rightarrow \\infty}1/x\\log(e^x)=\\lim_{x\\rightarrow \\infty}x/x=1\n$$</p>\n" }, { "answer_id": 2090612, "score": 1, "text": "<p>We have: $\\ln \\left(\\left(e^x-1\\right)^{1/x}\\right) = \\dfrac{\\ln(e^x-1)}{x}$. 
You can use L'hospital rule here..and your answer would be $e^L$ whereas $L$ is the limit of the L'hospital.</p>\n" }, { "answer_id": 2090626, "score": 1, "text": "<p>$$\n\\lim _{x\\to \\infty }\\left(\\left(e^x-1\\right)^{\\frac{1}{x}}\\right)=\\lim_{x\\rightarrow \\infty}e^{\\frac{1}{x}\\ln(e^x-1)}\\approx \\lim_{x\\rightarrow \\infty}\\frac{1}{x}\\ln(e^x)=\\lim_{x\\rightarrow \\infty}\\frac{x}{x}=1 \\rightarrow \\color{red}{e^1}\n$$</p>\n" }, { "answer_id": 2090680, "score": 8, "text": "<p>Note that we can write</p>\n\n<p>$$\\left(e^x-1\\right)^{1/x}=e\\left(1-e^{-x}\\right)^{1/x}$$</p>\n\n<p>The limit, $\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}$, is not of indeterminate form since $1^0=1$.</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{x\\to \\infty}\\left(e^x-1\\right)^{1/x}&amp;=\\lim_{x\\to \\infty}\\left(e\\left(1-e^{-x}\\right)^{1/x}\\right)\\\\\\\\\n&amp;=e\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}\\\\\\\\\n&amp;=e\n\\end{align}$$ </p>\n" }, { "answer_id": 2090783, "score": 3, "text": "<p>Check that $(1/2)e^x &lt; e^x-1$ for $x&gt;1.$ Thus</p>\n\n<p>$$((1/2)e^x)^{1/x} &lt; (e^x-1)^{1/x}&lt; (e^x)^{1/x}\\,\\text { for } x&gt;1.$$</p>\n\n<p>The term on the left equals $(1/2)^{1/x}e,$ the term on the right equals $e.$ Since $(1/2)^{1/x} \\to 1,$ the desired limit is $e$ by the squeeze theorem.</p>\n" }, { "answer_id": 2090615, "score": 2, "text": "<p>If \n$$\nL=\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\Rightarrow \\log L=\\log\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\stackrel{\\text{continuity of the logarithm}}{\\Rightarrow} \\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\stackrel{L'Hôpital's rule}=\\lim_{x\\rightarrow \\infty}\\frac{e^x}{e^x}=1\\Rightarrow\\log L=1\\Rightarrow e=L\n$$\nTo remove L'Hôpital's use:\n$$\n\\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\sim \\lim_{x\\rightarrow \\infty}1/x\\log(e^x)=\\lim_{x\\rightarrow \\infty}x/x=1\n$$</p>\n" }, { "answer_id": 2090612, "score": 1, "text": "<p>We have: $\\ln \\left(\\left(e^x-1\\right)^{1/x}\\right) = \\dfrac{\\ln(e^x-1)}{x}$. 
You can use L'hospital rule here..and your answer would be $e^L$ whereas $L$ is the limit of the L'hospital.</p>\n" }, { "answer_id": 2090626, "score": 1, "text": "<p>$$\n\\lim _{x\\to \\infty }\\left(\\left(e^x-1\\right)^{\\frac{1}{x}}\\right)=\\lim_{x\\rightarrow \\infty}e^{\\frac{1}{x}\\ln(e^x-1)}\\approx \\lim_{x\\rightarrow \\infty}\\frac{1}{x}\\ln(e^x)=\\lim_{x\\rightarrow \\infty}\\frac{x}{x}=1 \\rightarrow \\color{red}{e^1}\n$$</p>\n" }, { "answer_id": 2090680, "score": 8, "text": "<p>Note that we can write</p>\n\n<p>$$\\left(e^x-1\\right)^{1/x}=e\\left(1-e^{-x}\\right)^{1/x}$$</p>\n\n<p>The limit, $\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}$, is not of indeterminate form since $1^0=1$.</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{x\\to \\infty}\\left(e^x-1\\right)^{1/x}&amp;=\\lim_{x\\to \\infty}\\left(e\\left(1-e^{-x}\\right)^{1/x}\\right)\\\\\\\\\n&amp;=e\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}\\\\\\\\\n&amp;=e\n\\end{align}$$ </p>\n" }, { "answer_id": 2090783, "score": 3, "text": "<p>Check that $(1/2)e^x &lt; e^x-1$ for $x&gt;1.$ Thus</p>\n\n<p>$$((1/2)e^x)^{1/x} &lt; (e^x-1)^{1/x}&lt; (e^x)^{1/x}\\,\\text { for } x&gt;1.$$</p>\n\n<p>The term on the left equals $(1/2)^{1/x}e,$ the term on the right equals $e.$ Since $(1/2)^{1/x} \\to 1,$ the desired limit is $e$ by the squeeze theorem.</p>\n" }, { "answer_id": 2090615, "score": 2, "text": "<p>If \n$$\nL=\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\Rightarrow \\log L=\\log\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\stackrel{\\text{continuity of the logarithm}}{\\Rightarrow} \\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\stackrel{L'Hôpital's rule}=\\lim_{x\\rightarrow \\infty}\\frac{e^x}{e^x}=1\\Rightarrow\\log L=1\\Rightarrow e=L\n$$\nTo remove L'Hôpital's use:\n$$\n\\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\sim \\lim_{x\\rightarrow \\infty}1/x\\log(e^x)=\\lim_{x\\rightarrow \\infty}x/x=1\n$$</p>\n" }, { "answer_id": 2090612, "score": 1, "text": "<p>We have: $\\ln \\left(\\left(e^x-1\\right)^{1/x}\\right) = \\dfrac{\\ln(e^x-1)}{x}$. 
You can use L'hospital rule here..and your answer would be $e^L$ whereas $L$ is the limit of the L'hospital.</p>\n" }, { "answer_id": 2090626, "score": 1, "text": "<p>$$\n\\lim _{x\\to \\infty }\\left(\\left(e^x-1\\right)^{\\frac{1}{x}}\\right)=\\lim_{x\\rightarrow \\infty}e^{\\frac{1}{x}\\ln(e^x-1)}\\approx \\lim_{x\\rightarrow \\infty}\\frac{1}{x}\\ln(e^x)=\\lim_{x\\rightarrow \\infty}\\frac{x}{x}=1 \\rightarrow \\color{red}{e^1}\n$$</p>\n" }, { "answer_id": 2090680, "score": 8, "text": "<p>Note that we can write</p>\n\n<p>$$\\left(e^x-1\\right)^{1/x}=e\\left(1-e^{-x}\\right)^{1/x}$$</p>\n\n<p>The limit, $\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}$, is not of indeterminate form since $1^0=1$.</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{x\\to \\infty}\\left(e^x-1\\right)^{1/x}&amp;=\\lim_{x\\to \\infty}\\left(e\\left(1-e^{-x}\\right)^{1/x}\\right)\\\\\\\\\n&amp;=e\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}\\\\\\\\\n&amp;=e\n\\end{align}$$ </p>\n" }, { "answer_id": 2090783, "score": 3, "text": "<p>Check that $(1/2)e^x &lt; e^x-1$ for $x&gt;1.$ Thus</p>\n\n<p>$$((1/2)e^x)^{1/x} &lt; (e^x-1)^{1/x}&lt; (e^x)^{1/x}\\,\\text { for } x&gt;1.$$</p>\n\n<p>The term on the left equals $(1/2)^{1/x}e,$ the term on the right equals $e.$ Since $(1/2)^{1/x} \\to 1,$ the desired limit is $e$ by the squeeze theorem.</p>\n" }, { "answer_id": 2090615, "score": 2, "text": "<p>If \n$$\nL=\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\Rightarrow \\log L=\\log\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\stackrel{\\text{continuity of the logarithm}}{\\Rightarrow} \\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\stackrel{L'Hôpital's rule}=\\lim_{x\\rightarrow \\infty}\\frac{e^x}{e^x}=1\\Rightarrow\\log L=1\\Rightarrow e=L\n$$\nTo remove L'Hôpital's use:\n$$\n\\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\sim \\lim_{x\\rightarrow \\infty}1/x\\log(e^x)=\\lim_{x\\rightarrow \\infty}x/x=1\n$$</p>\n" }, { "answer_id": 2090612, "score": 1, "text": "<p>We have: $\\ln \\left(\\left(e^x-1\\right)^{1/x}\\right) = \\dfrac{\\ln(e^x-1)}{x}$. 
You can use L'hospital rule here..and your answer would be $e^L$ whereas $L$ is the limit of the L'hospital.</p>\n" }, { "answer_id": 2090626, "score": 1, "text": "<p>$$\n\\lim _{x\\to \\infty }\\left(\\left(e^x-1\\right)^{\\frac{1}{x}}\\right)=\\lim_{x\\rightarrow \\infty}e^{\\frac{1}{x}\\ln(e^x-1)}\\approx \\lim_{x\\rightarrow \\infty}\\frac{1}{x}\\ln(e^x)=\\lim_{x\\rightarrow \\infty}\\frac{x}{x}=1 \\rightarrow \\color{red}{e^1}\n$$</p>\n" }, { "answer_id": 2090680, "score": 8, "text": "<p>Note that we can write</p>\n\n<p>$$\\left(e^x-1\\right)^{1/x}=e\\left(1-e^{-x}\\right)^{1/x}$$</p>\n\n<p>The limit, $\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}$, is not of indeterminate form since $1^0=1$.</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{x\\to \\infty}\\left(e^x-1\\right)^{1/x}&amp;=\\lim_{x\\to \\infty}\\left(e\\left(1-e^{-x}\\right)^{1/x}\\right)\\\\\\\\\n&amp;=e\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}\\\\\\\\\n&amp;=e\n\\end{align}$$ </p>\n" }, { "answer_id": 2090783, "score": 3, "text": "<p>Check that $(1/2)e^x &lt; e^x-1$ for $x&gt;1.$ Thus</p>\n\n<p>$$((1/2)e^x)^{1/x} &lt; (e^x-1)^{1/x}&lt; (e^x)^{1/x}\\,\\text { for } x&gt;1.$$</p>\n\n<p>The term on the left equals $(1/2)^{1/x}e,$ the term on the right equals $e.$ Since $(1/2)^{1/x} \\to 1,$ the desired limit is $e$ by the squeeze theorem.</p>\n" }, { "answer_id": 2090615, "score": 2, "text": "<p>If \n$$\nL=\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\Rightarrow \\log L=\\log\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\stackrel{\\text{continuity of the logarithm}}{\\Rightarrow} \\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\stackrel{L'Hôpital's rule}=\\lim_{x\\rightarrow \\infty}\\frac{e^x}{e^x}=1\\Rightarrow\\log L=1\\Rightarrow e=L\n$$\nTo remove L'Hôpital's use:\n$$\n\\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\sim \\lim_{x\\rightarrow \\infty}1/x\\log(e^x)=\\lim_{x\\rightarrow \\infty}x/x=1\n$$</p>\n" }, { "answer_id": 2090612, "score": 1, "text": "<p>We have: $\\ln \\left(\\left(e^x-1\\right)^{1/x}\\right) = \\dfrac{\\ln(e^x-1)}{x}$. 
You can use L'hospital rule here..and your answer would be $e^L$ whereas $L$ is the limit of the L'hospital.</p>\n" }, { "answer_id": 2090626, "score": 1, "text": "<p>$$\n\\lim _{x\\to \\infty }\\left(\\left(e^x-1\\right)^{\\frac{1}{x}}\\right)=\\lim_{x\\rightarrow \\infty}e^{\\frac{1}{x}\\ln(e^x-1)}\\approx \\lim_{x\\rightarrow \\infty}\\frac{1}{x}\\ln(e^x)=\\lim_{x\\rightarrow \\infty}\\frac{x}{x}=1 \\rightarrow \\color{red}{e^1}\n$$</p>\n" }, { "answer_id": 2090680, "score": 8, "text": "<p>Note that we can write</p>\n\n<p>$$\\left(e^x-1\\right)^{1/x}=e\\left(1-e^{-x}\\right)^{1/x}$$</p>\n\n<p>The limit, $\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}$, is not of indeterminate form since $1^0=1$.</p>\n\n<p>Therefore, we have</p>\n\n<p>$$\\begin{align}\n\\lim_{x\\to \\infty}\\left(e^x-1\\right)^{1/x}&amp;=\\lim_{x\\to \\infty}\\left(e\\left(1-e^{-x}\\right)^{1/x}\\right)\\\\\\\\\n&amp;=e\\lim_{x\\to \\infty}\\left(1-e^{-x}\\right)^{1/x}\\\\\\\\\n&amp;=e\n\\end{align}$$ </p>\n" }, { "answer_id": 2090783, "score": 3, "text": "<p>Check that $(1/2)e^x &lt; e^x-1$ for $x&gt;1.$ Thus</p>\n\n<p>$$((1/2)e^x)^{1/x} &lt; (e^x-1)^{1/x}&lt; (e^x)^{1/x}\\,\\text { for } x&gt;1.$$</p>\n\n<p>The term on the left equals $(1/2)^{1/x}e,$ the term on the right equals $e.$ Since $(1/2)^{1/x} \\to 1,$ the desired limit is $e$ by the squeeze theorem.</p>\n" }, { "answer_id": 2090615, "score": 2, "text": "<p>If \n$$\nL=\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\Rightarrow \\log L=\\log\\lim_{x\\rightarrow \\infty}(e^x-1)^{1/x}\\stackrel{\\text{continuity of the logarithm}}{\\Rightarrow} \\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\stackrel{L'Hôpital's rule}=\\lim_{x\\rightarrow \\infty}\\frac{e^x}{e^x}=1\\Rightarrow\\log L=1\\Rightarrow e=L\n$$\nTo remove L'Hôpital's use:\n$$\n\\lim_{x\\rightarrow \\infty}1/x\\log(e^x-1)\\sim \\lim_{x\\rightarrow \\infty}1/x\\log(e^x)=\\lim_{x\\rightarrow \\infty}x/x=1\n$$</p>\n" }, { "answer_id": 2090612, "score": 1, "text": "<p>We have: $\\ln \\left(\\left(e^x-1\\right)^{1/x}\\right) = \\dfrac{\\ln(e^x-1)}{x}$. You can use L'hospital rule here..and your answer would be $e^L$ whereas $L$ is the limit of the L'hospital.</p>\n" }, { "answer_id": 2090626, "score": 1, "text": "<p>$$\n\\lim _{x\\to \\infty }\\left(\\left(e^x-1\\right)^{\\frac{1}{x}}\\right)=\\lim_{x\\rightarrow \\infty}e^{\\frac{1}{x}\\ln(e^x-1)}\\approx \\lim_{x\\rightarrow \\infty}\\frac{1}{x}\\ln(e^x)=\\lim_{x\\rightarrow \\infty}\\frac{x}{x}=1 \\rightarrow \\color{red}{e^1}\n$$</p>\n" } ]
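As a quick cross-check of the limit these answers compute, here is a minimal SymPy sketch (an editorial illustration, not part of any answer above; the symbol names are our own, and the comments state the expected outputs under that assumption):

```python
# Verify lim_{x->oo} (e^x - 1)^(1/x) = e, plus the logarithm trick
# log((e^x - 1)^(1/x)) = log(e^x - 1)/x used in several answers.
from sympy import symbols, exp, log, limit, oo

x = symbols('x', positive=True)

print(limit(log(exp(x) - 1) / x, x, oo))   # expected: 1
print(limit((exp(x) - 1)**(1/x), x, oo))   # expected: E, i.e. e^1
```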
Question: limit of $(e^x-1)^{\frac{1}{x}}$ - Without L&#39;hopital Answer 1: <p>Note that we can write</p> <p>$$\left(e^x-1\right)^{1/x}=e\left(1-e^{-x}\right)^{1/x}$$</p> <p>The limit, $\lim_{x\to \infty}\left(1-e^{-x}\right)^{1/x}$, is not of indeterminate form since $1^0=1$.</p> <p>Therefore, we have</p> <p>$$\begin{align} \lim_{x\to \infty}\left(e^x-1\right)^{1/x}&amp;=\lim_{x\to \infty}\left(e\left(1-e^{-x}\right)^{1/x}\right)\\\\ &amp;=e\lim_{x\to \infty}\left(1-e^{-x}\right)^{1/x}\\\\ &amp;=e \end{align}$$ </p> Answer 2: <p>Check that $(1/2)e^x &lt; e^x-1$ for $x&gt;1.$ Thus</p> <p>$$((1/2)e^x)^{1/x} &lt; (e^x-1)^{1/x}&lt; (e^x)^{1/x}\,\text { for } x&gt;1.$$</p> <p>The term on the left equals $(1/2)^{1/x}e,$ the term on the right equals $e.$ Since $(1/2)^{1/x} \to 1,$ the desired limit is $e$ by the squeeze theorem.</p> Answer 3: <p>If $$ L=\lim_{x\rightarrow \infty}(e^x-1)^{1/x}\Rightarrow \log L=\log\lim_{x\rightarrow \infty}(e^x-1)^{1/x}\stackrel{\text{continuity of the logarithm}}{\Rightarrow} \lim_{x\rightarrow \infty}1/x\log(e^x-1)\stackrel{\text{L'Hôpital's rule}}{=}\lim_{x\rightarrow \infty}\frac{e^x}{e^x}=1\Rightarrow\log L=1\Rightarrow e=L $$ To remove L'Hôpital's use: $$ \lim_{x\rightarrow \infty}1/x\log(e^x-1)\sim \lim_{x\rightarrow \infty}1/x\log(e^x)=\lim_{x\rightarrow \infty}x/x=1 $$</p> Answer 4: <p>We have: $\ln \left(\left(e^x-1\right)^{1/x}\right) = \dfrac{\ln(e^x-1)}{x}$. You can use L'Hôpital's rule here, and your answer would be $e^L$, where $L$ is the limit obtained from L'Hôpital's rule.</p> Answer 5: <p>$$ \lim _{x\to \infty }\left(\left(e^x-1\right)^{\frac{1}{x}}\right)=\lim_{x\rightarrow \infty}e^{\frac{1}{x}\ln(e^x-1)}\approx \lim_{x\rightarrow \infty}\frac{1}{x}\ln(e^x)=\lim_{x\rightarrow \infty}\frac{x}{x}=1 \rightarrow \color{red}{e^1} $$</p>
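To complement the squeeze-theorem argument (Answer 2 above), here is a small numerical illustration of the bounds $((1/2)e^x)^{1/x} < (e^x-1)^{1/x} < e$ for $x>1$. This is our own sketch using only the Python standard library; the sample points are arbitrary.

```python
# All three quantities should approach e ~ 2.718281... as x grows.
import math

for xv in (2.0, 10.0, 50.0):
    lower = (0.5 * math.exp(xv)) ** (1.0 / xv)  # equals (1/2)^(1/x) * e
    mid = (math.exp(xv) - 1.0) ** (1.0 / xv)
    print(f"x={xv:5.1f}: {lower:.6f} < {mid:.6f} < {math.e:.6f}")
```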
On the wedge product of forms
2,091,004
7
https://math.stackexchange.com/questions/2091004/on-the-wedge-product-of-forms
<p>Pages 36-37 of Loring Tu's <em>Introduction to Manifolds</em> says:</p> <blockquote> <p>The <em>wedge product</em> of a $k$-form $\omega$ and an $l$-form $\tau$ on an open set $U$ is defined pointwise: $$(\omega\wedge \tau)_p=\omega_p\wedge \tau_p, \hspace{.75cm} p\in U. $$ In terms of coordinates, if $\omega=\sum_I a_Idx^I$ and $\tau=\sum_J b_J dx^J$, then $$\omega \wedge \tau=\sum_{I,J}(a_Ib_J)dx^I\wedge dx^J\tag{$\ast$}.$$ In this sum, if $I$ and $J$ are not disjoint on the right hand side, then $dx^I\wedge dx^J=0.$ Hence, the sum is actually over disjoint multi-indices: $$ \omega\wedge\tau=\sum_{I,J\text{ disjoint}}(a_Ib_J)dx^I\wedge dx^J, $$ which shows that the wedge product of two $C^\infty$ forms is $C^\infty$.</p> </blockquote> <p>Here, he uses the capital letters $I$ and $J$ to denote strictly increasing sets of indices $I=(i_1&lt;\dots&lt;i_k)$ and $J=(j_1&lt;\dots&lt;j_l)$ of lengths $k$ and $l$, respectively, from the set of indices $\{1,\dots,n\}$. So, for example, $dx^I=dx^{i_1}\wedge\dots\wedge dx^{i_k}$ where $i_1,\dots,i_k\in \{1,\dots,n\}$. </p> <p>My question is on the equation in $(\ast)$. Essentially, why does it make sense that the pointwise wedge product of $\omega$ and $\tau$ yields the equation in $(\ast)$? I understand the last half of the text above, and I am even able to use the definition of $\omega\wedge \tau$ given in $(\ast)$ to compute a specific example given in the exercises. But how does one arrive at this equation by defining the wedge product of $\omega$ and $\tau$ pointwise?</p>
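Before the answer below works Problem 4.3 by hand, note that for two 1-forms, formula $(\ast)$ together with $dx^i\wedge dx^i=0$ and antisymmetry says the coefficient of $dr\wedge d\theta$ in $dx\wedge dy$ is exactly the Jacobian determinant. Here is a minimal SymPy sketch of that check (an editorial illustration; the symbol names are our own):

```python
# For x = r cos(theta), y = r sin(theta), the coefficient of dr ^ dtheta
# in dx ^ dy is det of the Jacobian, by (*) plus antisymmetry of the wedge.
from sympy import symbols, cos, sin, Matrix, simplify

r, theta = symbols('r theta', positive=True)
x, y = r * cos(theta), r * sin(theta)

jac = Matrix([[x.diff(r), x.diff(theta)],
              [y.diff(r), y.diff(theta)]])
print(simplify(jac.det()))  # expected: r, i.e. dx ^ dy = r dr ^ dtheta
```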
[ { "answer_id": 2091109, "score": 5, "text": "<p>I suppose I will answer my own question since I think I've figured it out. It comes down to the fact that the functions $a_I$ and $b_J$ are technically 0-forms. Then we use algebraic properties of the wedge product and look at what those algebraic properties tell us when a 0-form is involved. </p>\n\n<p>First, by the anticommutativity of $\wedge$, which says that for a $k$-form $f$ and an $\ell$-form $g$\n $$f\wedge g=(-1)^{k\ell}g\wedge f.$$ \nSo, for a 0-form $f$ and an $\ell$-form $g$, we get $f\wedge g=(-1)^{0\cdot \ell} g\wedge f=g\wedge f$. </p>\n\n<p>Second, Tu remarks on page 37 that the wedge product of a 0-form $f$ and an $\ell$-form $\omega$ is actually regular multiplication; that is $f\wedge \omega=f\omega$. So, at a point $p$ we have $(f\wedge\omega)_p=f(p)\omega_p$. </p>\n\n<p>Therefore, if we assume for a moment that $\omega=a_Idx^I$ and $\tau=b_J dx^J$, then we can view this as $\omega=a_I\wedge dx^I$ and $\tau=b_J\wedge dx^J$. Then\n$$\n\omega\wedge \tau=a_I\wedge dx^I\wedge b_J\wedge dx^J=a_I\wedge b_J\wedge dx^I\wedge dx^J=(a_Ib_J)\wedge dx^I\wedge dx^J=(a_Ib_J) dx^I\wedge dx^J.\n$$\nHere's an example:\nOn Problem 4.3 at the end of the section, we are asked to compute $dx\wedge dy$ where $x=r\cos\theta$ and $y=r\sin\theta$. We get:\n$$\ndx=\frac{\partial x}{\partial r}dr+\frac{\partial x}{\partial \theta}d\theta\n\\\ndy=\frac{\partial y}{\partial r}dr+\frac{\partial y}{\partial \theta}d\theta,\n$$\nand so\n$$\ndx=\cos\theta dr-r\sin\theta d\theta,\n\\\ndy=\sin\theta dr+r\cos\theta d\theta.\n$$\nRemember that there is a $\wedge$ between a 0-form and a 1-form; i.e., $\frac{\partial x}{\partial r}dr=\frac{\partial x}{\partial r}\wedge dr$. 
Now,\n\\begin{align*}\ndx\\wedge dy&amp;= \\left(\\frac{\\partial x}{\\partial r}dr+\\frac{\\partial x}{\\partial \\theta}d\\theta\\right)\\wedge\\left(\\frac{\\partial y}{\\partial r}dr+\\frac{\\partial y}{\\partial \\theta}d\\theta\\right)\\\\\n&amp;= \\left(\\frac{\\partial x}{\\partial r}dr\\wedge \\frac{\\partial y}{\\partial r}dr\\right)\n+\\left(\\frac{\\partial x}{\\partial r}dr\\wedge \\frac{\\partial y}{\\partial\\theta}d\\theta\\right)\n+\\left(\\frac{\\partial x}{\\partial\\theta}d\\theta\\wedge\\frac{\\partial y}{\\partial r}dr\\right)\n+\\left(\\frac{\\partial x}{\\partial \\theta}d\\theta\\wedge \\frac{\\partial y}{\\partial\\theta}d\\theta\\right)\\\\\n&amp;= \\left(\\frac{\\partial x}{\\partial r} \\frac{\\partial y}{\\partial r}dr\\wedge dr\\right)\n+\\left(\\frac{\\partial x}{\\partial r} \\frac{\\partial y}{\\partial\\theta}dr\\wedge d\\theta\\right)\n+\\left(\\frac{\\partial x}{\\partial\\theta}\\frac{\\partial y}{\\partial r}d\\theta\\wedge dr\\right)\n+\\left(\\frac{\\partial x}{\\partial \\theta} \\frac{\\partial y}{\\partial\\theta}d\\theta\\wedge d\\theta\\right)\\\\\n&amp;=0+\\left(\\frac{\\partial x}{\\partial r} \\frac{\\partial y}{\\partial\\theta}dr\\wedge d\\theta\\right)\n+\\left(\\frac{\\partial x}{\\partial\\theta}\\frac{\\partial y}{\\partial r}d\\theta\\wedge dr\\right)+0\\\\\n&amp;=r dr\\wedge d\\theta.\n\\end{align*}\nThe first equal sign is by definition; the second is because $\\wedge$ is distributive over addition; the third is by the anticommutativity of $\\wedge$, where, for example we have $dr\\wedge \\frac{\\partial y}{\\partial r}=\\frac{\\partial y}{\\partial r}\\wedge dr=\\frac{\\partial y}{\\partial r}dr$ in the first parenthesis; the fourth equal sign is by the fact that\n$dr\\wedge dr=0=d\\theta\\wedge d\\theta$; the fifth equal sign follows after plugging in the partials and simplifying. </p>\n" }, { "answer_id": 2091109, "score": 5, "text": "<p>I suppose I will answer my own question since I think I've figured it out. It comes down to the fact that the functions $a_I$ and $b_J$ are technically 0-forms. Then we use algebraic properties of the wedge product and look at what those algebraic properties tell us when a 0-form is involved. </p>\n\n<p>First, by the anticommutativity of $\\wedge$, which says that for a $k$-form $f$ and an $\\ell$-form $g$\n $$f\\wedge g=(-1)^{k\\ell}g\\wedge f.$$ \nSo, for a 0-form $f$ and an $\\ell$-form $g$, we get $f\\wedge g=(-1)^{0\\cdot \\ell} g\\wedge f=g\\wedge f$. </p>\n\n<p>Second, Tu remarks on page 37 that the wedge product of a 0-form $f$ and an $\\ell$-form $\\omega$ is actually regular multiplication; that is $f\\wedge \\omega=f\\omega$. So, at a point $p$ we have $(f\\wedge\\omega)_p=f(p)\\omega_p$. </p>\n\n<p>Therefore, if we assume for a moment that $\\omega=a_Idx^I$ and $\\tau=b_J dx^J$, then we can view this as $\\omega=a_I\\wedge dx^I$ and $\\tau=b_J\\wedge dx^J$. Then\n$$\n\\omega\\wedge \\tau=a_I\\wedge dx^I\\wedge b_J\\wedge dx^J=a_I\\wedge b_J\\wedge dx^I\\wedge dx^J=(a_Ib_J)\\wedge dx^I\\wedge dx^J=(a_Ib_J) dx^I\\wedge dx^J.\n$$\nHere's an example:\nOn Problem 4.3 at the end of the section, we are asked to compute $dx\\wedge dy$ where $x=r\\cos\\theta$ and $y=r\\sin\\theta$. 
We get:\n$$\ndx=\\frac{\\partial x}{\\partial r}dr+\\frac{\\partial x}{\\partial \\theta}d\\theta\n\\\\\ndy=\\frac{\\partial y}{\\partial r}dr+\\frac{\\partial y}{\\partial \\theta}d\\theta,\n$$\nand so\n$$\ndx=\\cos\\theta dr-r\\sin\\theta d\\theta,\n\\\\\ndy=sin\\theta dr+r\\cos\\theta d\\theta.\n$$\nRemember that there is a $\\wedge$ between a 0-form and a 1-form; i.e., $\\frac{\\partial x}{\\partial r}dr=\\frac{\\partial x}{\\partial r}\\wedge dr$. Now,\n\\begin{align*}\ndx\\wedge dy&amp;= \\left(\\frac{\\partial x}{\\partial r}dr+\\frac{\\partial x}{\\partial \\theta}d\\theta\\right)\\wedge\\left(\\frac{\\partial y}{\\partial r}dr+\\frac{\\partial y}{\\partial \\theta}d\\theta\\right)\\\\\n&amp;= \\left(\\frac{\\partial x}{\\partial r}dr\\wedge \\frac{\\partial y}{\\partial r}dr\\right)\n+\\left(\\frac{\\partial x}{\\partial r}dr\\wedge \\frac{\\partial y}{\\partial\\theta}d\\theta\\right)\n+\\left(\\frac{\\partial x}{\\partial\\theta}d\\theta\\wedge\\frac{\\partial y}{\\partial r}dr\\right)\n+\\left(\\frac{\\partial x}{\\partial \\theta}d\\theta\\wedge \\frac{\\partial y}{\\partial\\theta}d\\theta\\right)\\\\\n&amp;= \\left(\\frac{\\partial x}{\\partial r} \\frac{\\partial y}{\\partial r}dr\\wedge dr\\right)\n+\\left(\\frac{\\partial x}{\\partial r} \\frac{\\partial y}{\\partial\\theta}dr\\wedge d\\theta\\right)\n+\\left(\\frac{\\partial x}{\\partial\\theta}\\frac{\\partial y}{\\partial r}d\\theta\\wedge dr\\right)\n+\\left(\\frac{\\partial x}{\\partial \\theta} \\frac{\\partial y}{\\partial\\theta}d\\theta\\wedge d\\theta\\right)\\\\\n&amp;=0+\\left(\\frac{\\partial x}{\\partial r} \\frac{\\partial y}{\\partial\\theta}dr\\wedge d\\theta\\right)\n+\\left(\\frac{\\partial x}{\\partial\\theta}\\frac{\\partial y}{\\partial r}d\\theta\\wedge dr\\right)+0\\\\\n&amp;=r dr\\wedge d\\theta.\n\\end{align*}\nThe first equal sign is by definition; the second is because $\\wedge$ is distributive over addition; the third is by the anticommutativity of $\\wedge$, where, for example we have $dr\\wedge \\frac{\\partial y}{\\partial r}=\\frac{\\partial y}{\\partial r}\\wedge dr=\\frac{\\partial y}{\\partial r}dr$ in the first parenthesis; the fourth equal sign is by the fact that\n$dr\\wedge dr=0=d\\theta\\wedge d\\theta$; the fifth equal sign follows after plugging in the partials and simplifying. </p>\n" }, { "answer_id": 2091109, "score": 5, "text": "<p>I suppose I will answer my own question since I think I've figured it out. It comes down to the fact that the functions $a_I$ and $b_J$ are technically 0-forms. Then we use algebraic properties of the wedge product and look at what those algebraic properties tell us when a 0-form is involved. </p>\n\n<p>First, by the anticommutativity of $\\wedge$, which says that for a $k$-form $f$ and an $\\ell$-form $g$\n $$f\\wedge g=(-1)^{k\\ell}g\\wedge f.$$ \nSo, for a 0-form $f$ and an $\\ell$-form $g$, we get $f\\wedge g=(-1)^{0\\cdot \\ell} g\\wedge f=g\\wedge f$. </p>\n\n<p>Second, Tu remarks on page 37 that the wedge product of a 0-form $f$ and an $\\ell$-form $\\omega$ is actually regular multiplication; that is $f\\wedge \\omega=f\\omega$. So, at a point $p$ we have $(f\\wedge\\omega)_p=f(p)\\omega_p$. </p>\n\n<p>Therefore, if we assume for a moment that $\\omega=a_Idx^I$ and $\\tau=b_J dx^J$, then we can view this as $\\omega=a_I\\wedge dx^I$ and $\\tau=b_J\\wedge dx^J$. 
Then\n$$\n\\omega\\wedge \\tau=a_I\\wedge dx^I\\wedge b_J\\wedge dx^J=a_I\\wedge b_J\\wedge dx^I\\wedge dx^J=(a_Ib_J)\\wedge dx^I\\wedge dx^J=(a_Ib_J) dx^I\\wedge dx^J.\n$$\nHere's an example:\nOn Problem 4.3 at the end of the section, we are asked to compute $dx\\wedge dy$ where $x=r\\cos\\theta$ and $y=r\\sin\\theta$. We get:\n$$\ndx=\\frac{\\partial x}{\\partial r}dr+\\frac{\\partial x}{\\partial \\theta}d\\theta\n\\\\\ndy=\\frac{\\partial y}{\\partial r}dr+\\frac{\\partial y}{\\partial \\theta}d\\theta,\n$$\nand so\n$$\ndx=\\cos\\theta dr-r\\sin\\theta d\\theta,\n\\\\\ndy=sin\\theta dr+r\\cos\\theta d\\theta.\n$$\nRemember that there is a $\\wedge$ between a 0-form and a 1-form; i.e., $\\frac{\\partial x}{\\partial r}dr=\\frac{\\partial x}{\\partial r}\\wedge dr$. Now,\n\\begin{align*}\ndx\\wedge dy&amp;= \\left(\\frac{\\partial x}{\\partial r}dr+\\frac{\\partial x}{\\partial \\theta}d\\theta\\right)\\wedge\\left(\\frac{\\partial y}{\\partial r}dr+\\frac{\\partial y}{\\partial \\theta}d\\theta\\right)\\\\\n&amp;= \\left(\\frac{\\partial x}{\\partial r}dr\\wedge \\frac{\\partial y}{\\partial r}dr\\right)\n+\\left(\\frac{\\partial x}{\\partial r}dr\\wedge \\frac{\\partial y}{\\partial\\theta}d\\theta\\right)\n+\\left(\\frac{\\partial x}{\\partial\\theta}d\\theta\\wedge\\frac{\\partial y}{\\partial r}dr\\right)\n+\\left(\\frac{\\partial x}{\\partial \\theta}d\\theta\\wedge \\frac{\\partial y}{\\partial\\theta}d\\theta\\right)\\\\\n&amp;= \\left(\\frac{\\partial x}{\\partial r} \\frac{\\partial y}{\\partial r}dr\\wedge dr\\right)\n+\\left(\\frac{\\partial x}{\\partial r} \\frac{\\partial y}{\\partial\\theta}dr\\wedge d\\theta\\right)\n+\\left(\\frac{\\partial x}{\\partial\\theta}\\frac{\\partial y}{\\partial r}d\\theta\\wedge dr\\right)\n+\\left(\\frac{\\partial x}{\\partial \\theta} \\frac{\\partial y}{\\partial\\theta}d\\theta\\wedge d\\theta\\right)\\\\\n&amp;=0+\\left(\\frac{\\partial x}{\\partial r} \\frac{\\partial y}{\\partial\\theta}dr\\wedge d\\theta\\right)\n+\\left(\\frac{\\partial x}{\\partial\\theta}\\frac{\\partial y}{\\partial r}d\\theta\\wedge dr\\right)+0\\\\\n&amp;=r dr\\wedge d\\theta.\n\\end{align*}\nThe first equal sign is by definition; the second is because $\\wedge$ is distributive over addition; the third is by the anticommutativity of $\\wedge$, where, for example we have $dr\\wedge \\frac{\\partial y}{\\partial r}=\\frac{\\partial y}{\\partial r}\\wedge dr=\\frac{\\partial y}{\\partial r}dr$ in the first parenthesis; the fourth equal sign is by the fact that\n$dr\\wedge dr=0=d\\theta\\wedge d\\theta$; the fifth equal sign follows after plugging in the partials and simplifying. </p>\n" }, { "answer_id": 2091109, "score": 5, "text": "<p>I suppose I will answer my own question since I think I've figured it out. It comes down to the fact that the functions $a_I$ and $b_J$ are technically 0-forms. Then we use algebraic properties of the wedge product and look at what those algebraic properties tell us when a 0-form is involved. </p>\n\n<p>First, by the anticommutativity of $\\wedge$, which says that for a $k$-form $f$ and an $\\ell$-form $g$\n $$f\\wedge g=(-1)^{k\\ell}g\\wedge f.$$ \nSo, for a 0-form $f$ and an $\\ell$-form $g$, we get $f\\wedge g=(-1)^{0\\cdot \\ell} g\\wedge f=g\\wedge f$. </p>\n\n<p>Second, Tu remarks on page 37 that the wedge product of a 0-form $f$ and an $\\ell$-form $\\omega$ is actually regular multiplication; that is $f\\wedge \\omega=f\\omega$. So, at a point $p$ we have $(f\\wedge\\omega)_p=f(p)\\omega_p$. 
]
Question: On the wedge product of forms Answer 1: <p>I suppose I will answer my own question since I think I've figured it out. It comes down to the fact that the functions $a_I$ and $b_J$ are technically 0-forms. Then we use algebraic properties of the wedge product and look at what those properties tell us when a 0-form is involved. </p> <p>First, recall the anticommutativity of $\wedge$, which says that for a $k$-form $f$ and an $\ell$-form $g$ $$f\wedge g=(-1)^{k\ell}g\wedge f.$$ So, for a 0-form $f$ and an $\ell$-form $g$, we get $f\wedge g=(-1)^{0\cdot \ell} g\wedge f=g\wedge f$. </p> <p>Second, Tu remarks on page 37 that the wedge product of a 0-form $f$ and an $\ell$-form $\omega$ is just ordinary multiplication; that is, $f\wedge \omega=f\omega$. So, at a point $p$ we have $(f\wedge\omega)_p=f(p)\omega_p$. </p> <p>Therefore, if we assume for a moment that $\omega=a_Idx^I$ and $\tau=b_J dx^J$ (single terms rather than sums over multi-indices), then we can view this as $\omega=a_I\wedge dx^I$ and $\tau=b_J\wedge dx^J$. Then $$ \omega\wedge \tau=a_I\wedge dx^I\wedge b_J\wedge dx^J=a_I\wedge b_J\wedge dx^I\wedge dx^J=(a_Ib_J)\wedge dx^I\wedge dx^J=(a_Ib_J)\, dx^I\wedge dx^J, $$ where the middle step moves the 0-form $b_J$ past $dx^I$ using the two facts above. </p>
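<p>For instance, take $\omega=x\,dx$ and $\tau=y\,dy$ on $\mathbb{R}^2$, so that $a_I=x$ and $b_J=y$. The formula immediately gives $$ \omega\wedge\tau=(xy)\,dx\wedge dy, $$ with the two coefficient functions simply multiplied. </p>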
<p>Here's a more substantial example: Problem 4.3 at the end of the section asks us to compute $dx\wedge dy$, where $x=r\cos\theta$ and $y=r\sin\theta$. We get $$ dx=\frac{\partial x}{\partial r}dr+\frac{\partial x}{\partial \theta}d\theta, \\ dy=\frac{\partial y}{\partial r}dr+\frac{\partial y}{\partial \theta}d\theta, $$ and so $$ dx=\cos\theta\, dr-r\sin\theta\, d\theta, \\ dy=\sin\theta\, dr+r\cos\theta\, d\theta. $$ Remember that there is an implicit $\wedge$ between a 0-form and a 1-form; i.e., $\frac{\partial x}{\partial r}dr=\frac{\partial x}{\partial r}\wedge dr$. Now, \begin{align*} dx\wedge dy&amp;= \left(\frac{\partial x}{\partial r}dr+\frac{\partial x}{\partial \theta}d\theta\right)\wedge\left(\frac{\partial y}{\partial r}dr+\frac{\partial y}{\partial \theta}d\theta\right)\\ &amp;= \left(\frac{\partial x}{\partial r}dr\wedge \frac{\partial y}{\partial r}dr\right) +\left(\frac{\partial x}{\partial r}dr\wedge \frac{\partial y}{\partial\theta}d\theta\right) +\left(\frac{\partial x}{\partial\theta}d\theta\wedge\frac{\partial y}{\partial r}dr\right) +\left(\frac{\partial x}{\partial \theta}d\theta\wedge \frac{\partial y}{\partial\theta}d\theta\right)\\ &amp;= \left(\frac{\partial x}{\partial r} \frac{\partial y}{\partial r}dr\wedge dr\right) +\left(\frac{\partial x}{\partial r} \frac{\partial y}{\partial\theta}dr\wedge d\theta\right) +\left(\frac{\partial x}{\partial\theta}\frac{\partial y}{\partial r}d\theta\wedge dr\right) +\left(\frac{\partial x}{\partial \theta} \frac{\partial y}{\partial\theta}d\theta\wedge d\theta\right)\\ &amp;=0+\left(\frac{\partial x}{\partial r} \frac{\partial y}{\partial\theta}dr\wedge d\theta\right) +\left(\frac{\partial x}{\partial\theta}\frac{\partial y}{\partial r}d\theta\wedge dr\right)+0\\ &amp;=r\, dr\wedge d\theta. \end{align*} The first equality substitutes the expressions for $dx$ and $dy$; the second holds because $\wedge$ is distributive over addition; the third is by the anticommutativity of $\wedge$, where, for example, we have $dr\wedge \frac{\partial y}{\partial r}=\frac{\partial y}{\partial r}\wedge dr=\frac{\partial y}{\partial r}dr$ in the first parenthesized term; the fourth uses the fact that $dr\wedge dr=0=d\theta\wedge d\theta$; the fifth follows after plugging in the partials and simplifying, using $d\theta\wedge dr=-dr\wedge d\theta$ and $\cos^2\theta+\sin^2\theta=1$. </p>
Then we use algebraic properties of the wedge product and look at what those algebraic properties tell us when a 0-form is involved. </p> <p>First, by the anticommutativity of $\wedge$, which says that for a $k$-form $f$ and an $\ell$-form $g$ $$f\wedge g=(-1)^{k\ell}g\wedge f.$$ So, for a 0-form $f$ and an $\ell$-form $g$, we get $f\wedge g=(-1)^{0\cdot \ell} g\wedge f=g\wedge f$. </p> <p>Second, Tu remarks on page 37 that the wedge product of a 0-form $f$ and an $\ell$-form $\omega$ is actually regular multiplication; that is $f\wedge \omega=f\omega$. So, at a point $p$ we have $(f\wedge\omega)_p=f(p)\omega_p$. </p> <p>Therefore, if we assume for a moment that $\omega=a_Idx^I$ and $\tau=b_J dx^J$, then we can view this as $\omega=a_I\wedge dx^I$ and $\tau=b_J\wedge dx^J$. Then $$ \omega\wedge \tau=a_I\wedge dx^I\wedge b_J\wedge dx^J=a_I\wedge b_J\wedge dx^I\wedge dx^J=(a_Ib_J)\wedge dx^I\wedge dx^J=(a_Ib_J) dx^I\wedge dx^J. $$ Here's an example: On Problem 4.3 at the end of the section, we are asked to compute $dx\wedge dy$ where $x=r\cos\theta$ and $y=r\sin\theta$. We get: $$ dx=\frac{\partial x}{\partial r}dr+\frac{\partial x}{\partial \theta}d\theta \\ dy=\frac{\partial y}{\partial r}dr+\frac{\partial y}{\partial \theta}d\theta, $$ and so $$ dx=\cos\theta dr-r\sin\theta d\theta, \\ dy=sin\theta dr+r\cos\theta d\theta. $$ Remember that there is a $\wedge$ between a 0-form and a 1-form; i.e., $\frac{\partial x}{\partial r}dr=\frac{\partial x}{\partial r}\wedge dr$. Now, \begin{align*} dx\wedge dy&amp;= \left(\frac{\partial x}{\partial r}dr+\frac{\partial x}{\partial \theta}d\theta\right)\wedge\left(\frac{\partial y}{\partial r}dr+\frac{\partial y}{\partial \theta}d\theta\right)\\ &amp;= \left(\frac{\partial x}{\partial r}dr\wedge \frac{\partial y}{\partial r}dr\right) +\left(\frac{\partial x}{\partial r}dr\wedge \frac{\partial y}{\partial\theta}d\theta\right) +\left(\frac{\partial x}{\partial\theta}d\theta\wedge\frac{\partial y}{\partial r}dr\right) +\left(\frac{\partial x}{\partial \theta}d\theta\wedge \frac{\partial y}{\partial\theta}d\theta\right)\\ &amp;= \left(\frac{\partial x}{\partial r} \frac{\partial y}{\partial r}dr\wedge dr\right) +\left(\frac{\partial x}{\partial r} \frac{\partial y}{\partial\theta}dr\wedge d\theta\right) +\left(\frac{\partial x}{\partial\theta}\frac{\partial y}{\partial r}d\theta\wedge dr\right) +\left(\frac{\partial x}{\partial \theta} \frac{\partial y}{\partial\theta}d\theta\wedge d\theta\right)\\ &amp;=0+\left(\frac{\partial x}{\partial r} \frac{\partial y}{\partial\theta}dr\wedge d\theta\right) +\left(\frac{\partial x}{\partial\theta}\frac{\partial y}{\partial r}d\theta\wedge dr\right)+0\\ &amp;=r dr\wedge d\theta. \end{align*} The first equal sign is by definition; the second is because $\wedge$ is distributive over addition; the third is by the anticommutativity of $\wedge$, where, for example we have $dr\wedge \frac{\partial y}{\partial r}=\frac{\partial y}{\partial r}\wedge dr=\frac{\partial y}{\partial r}dr$ in the first parenthesis; the fourth equal sign is by the fact that $dr\wedge dr=0=d\theta\wedge d\theta$; the fifth equal sign follows after plugging in the partials and simplifying. </p> Answer 4: <p>I suppose I will answer my own question since I think I've figured it out. It comes down to the fact that the functions $a_I$ and $b_J$ are technically 0-forms. Then we use algebraic properties of the wedge product and look at what those algebraic properties tell us when a 0-form is involved. 
</p> <p>First, by the anticommutativity of $\wedge$, which says that for a $k$-form $f$ and an $\ell$-form $g$ $$f\wedge g=(-1)^{k\ell}g\wedge f.$$ So, for a 0-form $f$ and an $\ell$-form $g$, we get $f\wedge g=(-1)^{0\cdot \ell} g\wedge f=g\wedge f$. </p> <p>Second, Tu remarks on page 37 that the wedge product of a 0-form $f$ and an $\ell$-form $\omega$ is actually regular multiplication; that is $f\wedge \omega=f\omega$. So, at a point $p$ we have $(f\wedge\omega)_p=f(p)\omega_p$. </p> <p>Therefore, if we assume for a moment that $\omega=a_Idx^I$ and $\tau=b_J dx^J$, then we can view this as $\omega=a_I\wedge dx^I$ and $\tau=b_J\wedge dx^J$. Then $$ \omega\wedge \tau=a_I\wedge dx^I\wedge b_J\wedge dx^J=a_I\wedge b_J\wedge dx^I\wedge dx^J=(a_Ib_J)\wedge dx^I\wedge dx^J=(a_Ib_J) dx^I\wedge dx^J. $$ Here's an example: On Problem 4.3 at the end of the section, we are asked to compute $dx\wedge dy$ where $x=r\cos\theta$ and $y=r\sin\theta$. We get: $$ dx=\frac{\partial x}{\partial r}dr+\frac{\partial x}{\partial \theta}d\theta \\ dy=\frac{\partial y}{\partial r}dr+\frac{\partial y}{\partial \theta}d\theta, $$ and so $$ dx=\cos\theta dr-r\sin\theta d\theta, \\ dy=sin\theta dr+r\cos\theta d\theta. $$ Remember that there is a $\wedge$ between a 0-form and a 1-form; i.e., $\frac{\partial x}{\partial r}dr=\frac{\partial x}{\partial r}\wedge dr$. Now, \begin{align*} dx\wedge dy&amp;= \left(\frac{\partial x}{\partial r}dr+\frac{\partial x}{\partial \theta}d\theta\right)\wedge\left(\frac{\partial y}{\partial r}dr+\frac{\partial y}{\partial \theta}d\theta\right)\\ &amp;= \left(\frac{\partial x}{\partial r}dr\wedge \frac{\partial y}{\partial r}dr\right) +\left(\frac{\partial x}{\partial r}dr\wedge \frac{\partial y}{\partial\theta}d\theta\right) +\left(\frac{\partial x}{\partial\theta}d\theta\wedge\frac{\partial y}{\partial r}dr\right) +\left(\frac{\partial x}{\partial \theta}d\theta\wedge \frac{\partial y}{\partial\theta}d\theta\right)\\ &amp;= \left(\frac{\partial x}{\partial r} \frac{\partial y}{\partial r}dr\wedge dr\right) +\left(\frac{\partial x}{\partial r} \frac{\partial y}{\partial\theta}dr\wedge d\theta\right) +\left(\frac{\partial x}{\partial\theta}\frac{\partial y}{\partial r}d\theta\wedge dr\right) +\left(\frac{\partial x}{\partial \theta} \frac{\partial y}{\partial\theta}d\theta\wedge d\theta\right)\\ &amp;=0+\left(\frac{\partial x}{\partial r} \frac{\partial y}{\partial\theta}dr\wedge d\theta\right) +\left(\frac{\partial x}{\partial\theta}\frac{\partial y}{\partial r}d\theta\wedge dr\right)+0\\ &amp;=r dr\wedge d\theta. \end{align*} The first equal sign is by definition; the second is because $\wedge$ is distributive over addition; the third is by the anticommutativity of $\wedge$, where, for example we have $dr\wedge \frac{\partial y}{\partial r}=\frac{\partial y}{\partial r}\wedge dr=\frac{\partial y}{\partial r}dr$ in the first parenthesis; the fourth equal sign is by the fact that $dr\wedge dr=0=d\theta\wedge d\theta$; the fifth equal sign follows after plugging in the partials and simplifying. </p> Answer 5: <p>I suppose I will answer my own question since I think I've figured it out. It comes down to the fact that the functions $a_I$ and $b_J$ are technically 0-forms. Then we use algebraic properties of the wedge product and look at what those algebraic properties tell us when a 0-form is involved. 
</p> <p>First, by the anticommutativity of $\wedge$, which says that for a $k$-form $f$ and an $\ell$-form $g$ $$f\wedge g=(-1)^{k\ell}g\wedge f.$$ So, for a 0-form $f$ and an $\ell$-form $g$, we get $f\wedge g=(-1)^{0\cdot \ell} g\wedge f=g\wedge f$. </p> <p>Second, Tu remarks on page 37 that the wedge product of a 0-form $f$ and an $\ell$-form $\omega$ is actually regular multiplication; that is $f\wedge \omega=f\omega$. So, at a point $p$ we have $(f\wedge\omega)_p=f(p)\omega_p$. </p> <p>Therefore, if we assume for a moment that $\omega=a_Idx^I$ and $\tau=b_J dx^J$, then we can view this as $\omega=a_I\wedge dx^I$ and $\tau=b_J\wedge dx^J$. Then $$ \omega\wedge \tau=a_I\wedge dx^I\wedge b_J\wedge dx^J=a_I\wedge b_J\wedge dx^I\wedge dx^J=(a_Ib_J)\wedge dx^I\wedge dx^J=(a_Ib_J) dx^I\wedge dx^J. $$ Here's an example: On Problem 4.3 at the end of the section, we are asked to compute $dx\wedge dy$ where $x=r\cos\theta$ and $y=r\sin\theta$. We get: $$ dx=\frac{\partial x}{\partial r}dr+\frac{\partial x}{\partial \theta}d\theta \\ dy=\frac{\partial y}{\partial r}dr+\frac{\partial y}{\partial \theta}d\theta, $$ and so $$ dx=\cos\theta dr-r\sin\theta d\theta, \\ dy=sin\theta dr+r\cos\theta d\theta. $$ Remember that there is a $\wedge$ between a 0-form and a 1-form; i.e., $\frac{\partial x}{\partial r}dr=\frac{\partial x}{\partial r}\wedge dr$. Now, \begin{align*} dx\wedge dy&amp;= \left(\frac{\partial x}{\partial r}dr+\frac{\partial x}{\partial \theta}d\theta\right)\wedge\left(\frac{\partial y}{\partial r}dr+\frac{\partial y}{\partial \theta}d\theta\right)\\ &amp;= \left(\frac{\partial x}{\partial r}dr\wedge \frac{\partial y}{\partial r}dr\right) +\left(\frac{\partial x}{\partial r}dr\wedge \frac{\partial y}{\partial\theta}d\theta\right) +\left(\frac{\partial x}{\partial\theta}d\theta\wedge\frac{\partial y}{\partial r}dr\right) +\left(\frac{\partial x}{\partial \theta}d\theta\wedge \frac{\partial y}{\partial\theta}d\theta\right)\\ &amp;= \left(\frac{\partial x}{\partial r} \frac{\partial y}{\partial r}dr\wedge dr\right) +\left(\frac{\partial x}{\partial r} \frac{\partial y}{\partial\theta}dr\wedge d\theta\right) +\left(\frac{\partial x}{\partial\theta}\frac{\partial y}{\partial r}d\theta\wedge dr\right) +\left(\frac{\partial x}{\partial \theta} \frac{\partial y}{\partial\theta}d\theta\wedge d\theta\right)\\ &amp;=0+\left(\frac{\partial x}{\partial r} \frac{\partial y}{\partial\theta}dr\wedge d\theta\right) +\left(\frac{\partial x}{\partial\theta}\frac{\partial y}{\partial r}d\theta\wedge dr\right)+0\\ &amp;=r dr\wedge d\theta. \end{align*} The first equal sign is by definition; the second is because $\wedge$ is distributive over addition; the third is by the anticommutativity of $\wedge$, where, for example we have $dr\wedge \frac{\partial y}{\partial r}=\frac{\partial y}{\partial r}\wedge dr=\frac{\partial y}{\partial r}dr$ in the first parenthesis; the fourth equal sign is by the fact that $dr\wedge dr=0=d\theta\wedge d\theta$; the fifth equal sign follows after plugging in the partials and simplifying. </p> Answer 6: <p>I suppose I will answer my own question since I think I've figured it out. It comes down to the fact that the functions $a_I$ and $b_J$ are technically 0-forms. Then we use algebraic properties of the wedge product and look at what those algebraic properties tell us when a 0-form is involved. 
Is $\sum\limits_{k=0}^{n} {{n}\choose{k}}\frac{(-1)^{k}}{2k+2}$ equal to $\frac{1}{2n+2}$?
2,091,308
7
https://math.stackexchange.com/questions/2091308/is-sum-limits-k-0n-n-choosek-frac-1k2k2-equal-to-frac
<p>On p. 146 of <em><a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;ved=0ahUKEwjgyOyv17bRAhVMSyYKHbtdB2IQFggcMAA&amp;url=https%3A%2F%2Fnotendur.hi.is%2Fvae11%2F%25C3%259Eekking%2Fprinciples_of_mathematical_analysis_walter_rudin.pdf&amp;usg=AFQjCNECF1k5HDQP-wpHD5DgbEGJC54fOQ">Principles of Mathematical Analysis</a></em>, Rudin claims that $\int_{0}^{1} x(1-x^2)^n\,dx=\frac{1}{2n+2}$. But evaluating the integral I found it equal to $$\int_{0}^{1}x\left(\sum\limits_{k=0}^{n}{{n}\choose{k}}(-x^2)^k\right)dx=\int_{0}^{1}\sum\limits_{k=0}^{n}{{n}\choose{k}}(-1)^{k}x^{2k+1}\,dx=\left.\sum\limits_{k=0}^{n}{{n}\choose{k}}(-1)^{k}\frac{x^{2k+2}}{2k+2}\right|_{x=1},$$ where the lower limit $x=0$ contributes nothing, so the integral equals $\sum\limits_{k=0}^{n} {{n}\choose{k}}\frac{(-1)^{k}}{2k+2}$. So now I'm just wondering: why is this equal to $\frac{1}{2n+2}$?</p>
[ { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. 
<a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . 
So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . 
So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . 
So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . 
So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . 
So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . 
So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . 
So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . 
So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . 
So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . 
So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . 
So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" }, { "answer_id": 2091334, "score": 6, "text": "<p>$$\\begin{align*}\n\\sum_{k=0}^n\\binom{n}k\\frac{(-1)^k}{2k+2}&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{k+1}\\binom{n}k(-1)^k\\\\\n&amp;=\\frac{1}2\\sum_{k=0}^n\\frac{1}{n+1}\\binom{n+1}{k+1}(-1)^k\\\\\n&amp;=\\frac{1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^{k-1}\\\\\n&amp;=\\frac{-1}{2n+2}\\sum_{k=1}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}\\\\\n&amp;=\\frac{-1}{2n+2}\\left(\\sum_{k=0}^{n+1}\\binom{n+1}k(-1)^k1^{n+1-k}-\\binom{n+1}0(-1)^01^{n+1}\\right)\\\\\n&amp;=\\frac{-1}{2n+2}\\left((-1+1)^{n+1}-1\\right)\\\\\n&amp;=\\frac{1}{2n+2}\n\\end{align*}$$</p>\n\n<p>by the binomial theorem.</p>\n" }, { "answer_id": 2091336, "score": 6, "text": "<p>HINT:</p>\n\n<p>Note that we can write</p>\n\n<p>$$\\frac{1}{2k+2}\\,\\binom{n}{k}=\\frac{1}{2n+2}\\binom{n+1}{k+1}$$</p>\n\n<p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\\sum\\limits_{k=0}^{n+1} \\binom{n}{k}(-1)^k=0$.</p>\n" }, { "answer_id": 2091624, "score": 3, "text": "<p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\\frac{1}{2}\\int_{0}^{1}t^{n}\\,dt=\\frac{1}{2n+2}$$ But you seem more interested in the link between your sum and value of integral in a direct manner as given in answers by Brian Scott and Dr. MV. </p>\n" }, { "answer_id": 2091896, "score": 1, "text": "<p>An overkill. <a href=\"https://arxiv.org/abs/1603.07006\" rel=\"nofollow noreferrer\">The Melzak's identity</a> tell us that $$f\\left(x+y\\right)=y\\dbinom{y+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{f\\left(x-k\\right)}{y+k}\n $$ where $f$ is an algebraic polynomial up to dergee $n$, x,y\\in\\mathbb{R},\\, y\\neq-1,\\dots-n\n . So taking $f\\equiv1\n $ and $y=1$ we get $$1=\\dbinom{1+n}{n}\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}\n $$ hence $$\\sum_{k=0}^{n}\\dbinom{n}{k}\\left(-1\\right)^{k}\\frac{1}{1+k}=\\color{red}{\\frac{1}{n+1}}\\tag{1}\n $$ now you have only to multiply both side of $(1)$ by $1/2$.</p>\n" } ]
Question: Is $\sum\limits_{k=0}^{n} {{n}\choose{k}}\frac{(-1)^{k}}{2k+2}$ equal to $\frac{1}{2n+2}$? Answer 1: <p>$$\begin{align*} \sum_{k=0}^n\binom{n}k\frac{(-1)^k}{2k+2}&amp;=\frac{1}2\sum_{k=0}^n\frac{1}{k+1}\binom{n}k(-1)^k\\ &amp;=\frac{1}2\sum_{k=0}^n\frac{1}{n+1}\binom{n+1}{k+1}(-1)^k\\ &amp;=\frac{1}{2n+2}\sum_{k=1}^{n+1}\binom{n+1}k(-1)^{k-1}\\ &amp;=\frac{-1}{2n+2}\sum_{k=1}^{n+1}\binom{n+1}k(-1)^k1^{n+1-k}\\ &amp;=\frac{-1}{2n+2}\left(\sum_{k=0}^{n+1}\binom{n+1}k(-1)^k1^{n+1-k}-\binom{n+1}0(-1)^01^{n+1}\right)\\ &amp;=\frac{-1}{2n+2}\left((-1+1)^{n+1}-1\right)\\ &amp;=\frac{1}{2n+2} \end{align*}$$</p> <p>by the binomial theorem.</p> Answer 2: <p>HINT:</p> <p>Note that we can write</p> <p>$$\frac{1}{2k+2}\,\binom{n}{k}=\frac{1}{2n+2}\binom{n+1}{k+1}$$</p> <p>Then, substitute this into the summation, shift the index from $k$ to $k'=k+1$ so the new summation limits are $k'=1$ to $k'=n+1$, and use $\sum\limits_{k=0}^{n+1} \binom{n+1}{k}(-1)^k=0$.</p> Answer 3: <p>Why not use the substitution $1-x^{2}=t$ to change the integral to $$\frac{1}{2}\int_{0}^{1}t^{n}\,dt=\frac{1}{2n+2}.$$ But you seem more interested in a direct link between your sum and the value of the integral, as given in the answers by Brian Scott and Dr. MV.</p> Answer 4: <p>An overkill. <a href="https://arxiv.org/abs/1603.07006" rel="nofollow noreferrer">Melzak's identity</a> tells us that $$f\left(x+y\right)=y\dbinom{y+n}{n}\sum_{k=0}^{n}\dbinom{n}{k}\left(-1\right)^{k}\frac{f\left(x-k\right)}{y+k}$$ where $f$ is a polynomial of degree at most $n$ and $x,y\in\mathbb{R}$, $y\neq-1,\dots,-n$. So taking $f\equiv 1$ and $y=1$ we get $$1=\dbinom{1+n}{n}\sum_{k=0}^{n}\dbinom{n}{k}\left(-1\right)^{k}\frac{1}{1+k},$$ hence $$\sum_{k=0}^{n}\dbinom{n}{k}\left(-1\right)^{k}\frac{1}{1+k}=\color{red}{\frac{1}{n+1}}\tag{1}$$ and now you only have to multiply both sides of $(1)$ by $1/2$.</p>
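<p>For a quick cross-check of the identity in the question (a minimal sketch, assuming <code>sympy</code>; purely illustrative), the sum can be verified exactly for small $n$:</p> <pre><code>import sympy as sp

# Verify sum_{k=0}^{n} C(n,k) (-1)^k / (2k+2) == 1/(2n+2) with exact rationals.
for n in range(8):
    s = sum((-1)**k * sp.binomial(n, k) * sp.Rational(1, 2*k + 2)
            for k in range(n + 1))
    assert s == sp.Rational(1, 2*n + 2)
    print(n, s)
</code></pre>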
How do I calculate a weighted average from two averages?
2,091,521
7
https://math.stackexchange.com/questions/2091521/how-do-i-calculate-a-weighted-average-from-two-averages
<p>I have two sets of averaged data I am working with, each consisting of an average score and the total number of users that the average is derived from.</p> <p>For example:</p> <p>Average Score $4$, Total Number of participants (which the average is derived from): $835$</p> <p>Average Score $3.5$, Total Number of participants: $4,579$</p> <p>Can I calculate a weighted mean from these two averages and participant counts, or would that be inaccurate? </p>
[ { "answer_id": 2091526, "score": 13, "text": "<p>I assume you want to calculate the average score of all participants, which is (given your data) equal to $$\\frac{835}{835+4579} \\cdot 4 + \\frac{4579}{835+4579}\\cdot 3.5$$</p>\n\n<p>so the answer is yes.</p>\n\n<hr>\n\n<p>In general, if you have $n$ numbers $\\{a_1,\\dots, a_n\\}$ with an average of $a$, and $m$ numbers $\\{b_1,\\dots, b_m\\}$ with an average of $b$, you know that</p>\n\n<p>$$\\frac{a_1+\\cdots + a_n}{n}=a\\\\\n\\frac{b_1+\\cdots + b_m}{m}=b$$</p>\n\n<p>or, in other words, $$a_1+\\cdots + a_n = n\\cdot a\\\\b_1+\\cdots + b_m = m\\cdot b$$</p>\n\n<p>and you are looking for the number $\\frac{a_1+\\cdots + a_n+ b_1 + \\cdots + b_m}{m+n}$</p>\n\n<p>Now this simplifies to \n$$\\frac{a_1+\\cdots + a_n+ b_1 + \\cdots + b_m}{m+n} = \\frac{n\\cdot a + m\\cdot b}{m+n}=\\frac{n}{m+n}a + \\frac{m}{m+n} b$$</p>\n\n<p>which is a weighted average of $a$ (the first average) and $b$ (the second average). The weights are determined by the size of each group.</p>\n" }, { "answer_id": 2091531, "score": 2, "text": "<p>Assume that group 1 has an average of $m_1$ and size $N_1$ and group 2 has an average of $m_2$ and size $N_2$; then you can take the weighted average of the two means to get the overall mean:\n$$\nm= m_1\\cdot \\frac{N_1}{N_1+N_2}+m_2\\cdot \\frac{N_2}{N_1+N_2}.\n$$</p>\n" }, { "answer_id": 2091527, "score": 1, "text": "<p>$$\\frac{4\\cdot835+3.5\\cdot4579}{835+4579}$$</p>\n" } ]
Question: How do I calculate a weighted average from two averages? Answer 1: <p>I assume you want to calculate the average score of all participants, which is (given your data) equal to $$\frac{835}{835+4579} \cdot 4 + \frac{4579}{835+4579}\cdot 3.5$$</p> <p>so the answer is yes.</p> <hr> <p>In general, if you have $n$ numbers $\{a_1,\dots, a_n\}$ with an average of $a$, and $m$ numbers $\{b_1,\dots, b_m\}$ with an average of $b$, you know that</p> <p>$$\frac{a_1+\cdots + a_n}{n}=a\\ \frac{b_1+\cdots + b_m}{m}=b$$</p> <p>or, in other words, $$a_1+\cdots + a_n = n\cdot a\\b_1+\cdots + b_m = m\cdot b$$</p> <p>and you are looking for the number $\frac{a_1+\cdots + a_n+ b_1 + \cdots + b_m}{m+n}$</p> <p>Now this simplifies to $$\frac{a_1+\cdots + a_n+ b_1 + \cdots + b_m}{m+n} = \frac{n\cdot a + m\cdot b}{m+n}=\frac{n}{m+n}a + \frac{m}{m+n} b$$</p> <p>which is a weighted average of $a$ (the first average) and $b$ (the second average). The weights are determined by the size of each group.</p> Answer 2: <p>Assume that group 1 has an average of $m_1$ and size $N_1$ and group 2 has an average of $m_2$ and size $N_2$; then you can take the weighted average of the two means to get the overall mean: $$ m= m_1\cdot \frac{N_1}{N_1+N_2}+m_2\cdot \frac{N_2}{N_1+N_2}. $$</p> Answer 3: <p>$$\frac{4\cdot835+3.5\cdot4579}{835+4579}$$</p>
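As an illustration of the pooled-mean formula above, here is a minimal Python sketch; the helper name <code>combined_mean</code> is my own and not from the answers:
<pre><code>def combined_mean(groups):
    """Pooled mean of (mean, size) pairs, e.g. [(4, 835), (3.5, 4579)]."""
    total = sum(size for _, size in groups)
    return sum(mean * size for mean, size in groups) / total

# (4*835 + 3.5*4579) / (835 + 4579) = 3.577...
print(combined_mean([(4, 835), (3.5, 4579)]))
</code></pre>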
The weights are determined by the size of each group.</p> Answer 8: <p>Assume that group 1 has an average of $m_1$ and size $N_1$ and group 2 has an average of $m_2$ and size $N_2$ then you can take the weighted average of the two means two get the overall mean: $$ m= m1\cdot \frac{N_1}{N_1+N_2}+m2\cdot \frac{N_2}{N_1+N_2}. $$</p> Answer 9: <p>$$\frac{4\cdot835+3.5\cdot4579}{835+4579}$$</p> Answer 10: <p>I assume you want to calculate the average score of all participants, which is (given your data) equal to $$\frac{835}{835+4579} \cdot 4 + \frac{4579}{835+4579}\cdot 3.5$$</p> <p>so the answer is yes.</p> <hr> <p>In general, if you have $n$ numbers $\{a_1,\dots, a_n\}$ with an average of $a$, and $m$ numbers $\{b_1,\dots, b_m\}$ with an average of $b$, you know that</p> <p>$$\frac{a_1+\cdots + a_n}{n}=a\\ \frac{b_1+\cdots + b_m}{m}=b$$</p> <p>or, in other words, $$a_1+\cdots + a_n = n\cdot a\\b_1+\cdots + b_m = m\cdot b$$</p> <p>and you are looking for the number $\frac{a_1+\cdots + a_n+ b_1 + \cdots + b_m}{m+n}$</p> <p>Now this simplifies to $$\frac{a_1+\cdots + a_n+ b_1 + \cdots + b_m}{m+n} = \frac{n\cdot a + m\cdot b}{m+n}=\frac{n}{m+n}a + \frac{m}{m+n} b$$</p> <p>which is a weighted average of $a$ (the first average) and $b$ (the second average). The weights are determined by the size of each group.</p> Answer 11: <p>Assume that group 1 has an average of $m_1$ and size $N_1$ and group 2 has an average of $m_2$ and size $N_2$ then you can take the weighted average of the two means two get the overall mean: $$ m= m1\cdot \frac{N_1}{N_1+N_2}+m2\cdot \frac{N_2}{N_1+N_2}. $$</p> Answer 12: <p>$$\frac{4\cdot835+3.5\cdot4579}{835+4579}$$</p> Answer 13: <p>I assume you want to calculate the average score of all participants, which is (given your data) equal to $$\frac{835}{835+4579} \cdot 4 + \frac{4579}{835+4579}\cdot 3.5$$</p> <p>so the answer is yes.</p> <hr> <p>In general, if you have $n$ numbers $\{a_1,\dots, a_n\}$ with an average of $a$, and $m$ numbers $\{b_1,\dots, b_m\}$ with an average of $b$, you know that</p> <p>$$\frac{a_1+\cdots + a_n}{n}=a\\ \frac{b_1+\cdots + b_m}{m}=b$$</p> <p>or, in other words, $$a_1+\cdots + a_n = n\cdot a\\b_1+\cdots + b_m = m\cdot b$$</p> <p>and you are looking for the number $\frac{a_1+\cdots + a_n+ b_1 + \cdots + b_m}{m+n}$</p> <p>Now this simplifies to $$\frac{a_1+\cdots + a_n+ b_1 + \cdots + b_m}{m+n} = \frac{n\cdot a + m\cdot b}{m+n}=\frac{n}{m+n}a + \frac{m}{m+n} b$$</p> <p>which is a weighted average of $a$ (the first average) and $b$ (the second average). The weights are determined by the size of each group.</p> Answer 14: <p>Assume that group 1 has an average of $m_1$ and size $N_1$ and group 2 has an average of $m_2$ and size $N_2$ then you can take the weighted average of the two means two get the overall mean: $$ m= m1\cdot \frac{N_1}{N_1+N_2}+m2\cdot \frac{N_2}{N_1+N_2}. 
$$</p> Answer 15: <p>$$\frac{4\cdot835+3.5\cdot4579}{835+4579}$$</p> Answer 16: <p>I assume you want to calculate the average score of all participants, which is (given your data) equal to $$\frac{835}{835+4579} \cdot 4 + \frac{4579}{835+4579}\cdot 3.5$$</p> <p>so the answer is yes.</p> <hr> <p>In general, if you have $n$ numbers $\{a_1,\dots, a_n\}$ with an average of $a$, and $m$ numbers $\{b_1,\dots, b_m\}$ with an average of $b$, you know that</p> <p>$$\frac{a_1+\cdots + a_n}{n}=a\\ \frac{b_1+\cdots + b_m}{m}=b$$</p> <p>or, in other words, $$a_1+\cdots + a_n = n\cdot a\\b_1+\cdots + b_m = m\cdot b$$</p> <p>and you are looking for the number $\frac{a_1+\cdots + a_n+ b_1 + \cdots + b_m}{m+n}$</p> <p>Now this simplifies to $$\frac{a_1+\cdots + a_n+ b_1 + \cdots + b_m}{m+n} = \frac{n\cdot a + m\cdot b}{m+n}=\frac{n}{m+n}a + \frac{m}{m+n} b$$</p> <p>which is a weighted average of $a$ (the first average) and $b$ (the second average). The weights are determined by the size of each group.</p> Answer 17: <p>Assume that group 1 has an average of $m_1$ and size $N_1$ and group 2 has an average of $m_2$ and size $N_2$ then you can take the weighted average of the two means two get the overall mean: $$ m= m1\cdot \frac{N_1}{N_1+N_2}+m2\cdot \frac{N_2}{N_1+N_2}. $$</p> Answer 18: <p>$$\frac{4\cdot835+3.5\cdot4579}{835+4579}$$</p> Answer 19: <p>I assume you want to calculate the average score of all participants, which is (given your data) equal to $$\frac{835}{835+4579} \cdot 4 + \frac{4579}{835+4579}\cdot 3.5$$</p> <p>so the answer is yes.</p> <hr> <p>In general, if you have $n$ numbers $\{a_1,\dots, a_n\}$ with an average of $a$, and $m$ numbers $\{b_1,\dots, b_m\}$ with an average of $b$, you know that</p> <p>$$\frac{a_1+\cdots + a_n}{n}=a\\ \frac{b_1+\cdots + b_m}{m}=b$$</p> <p>or, in other words, $$a_1+\cdots + a_n = n\cdot a\\b_1+\cdots + b_m = m\cdot b$$</p> <p>and you are looking for the number $\frac{a_1+\cdots + a_n+ b_1 + \cdots + b_m}{m+n}$</p> <p>Now this simplifies to $$\frac{a_1+\cdots + a_n+ b_1 + \cdots + b_m}{m+n} = \frac{n\cdot a + m\cdot b}{m+n}=\frac{n}{m+n}a + \frac{m}{m+n} b$$</p> <p>which is a weighted average of $a$ (the first average) and $b$ (the second average). The weights are determined by the size of each group.</p> Answer 20: <p>Assume that group 1 has an average of $m_1$ and size $N_1$ and group 2 has an average of $m_2$ and size $N_2$ then you can take the weighted average of the two means two get the overall mean: $$ m= m1\cdot \frac{N_1}{N_1+N_2}+m2\cdot \frac{N_2}{N_1+N_2}. $$</p> Answer 21: <p>$$\frac{4\cdot835+3.5\cdot4579}{835+4579}$$</p> Answer 22: <p>I assume you want to calculate the average score of all participants, which is (given your data) equal to $$\frac{835}{835+4579} \cdot 4 + \frac{4579}{835+4579}\cdot 3.5$$</p> <p>so the answer is yes.</p> <hr> <p>In general, if you have $n$ numbers $\{a_1,\dots, a_n\}$ with an average of $a$, and $m$ numbers $\{b_1,\dots, b_m\}$ with an average of $b$, you know that</p> <p>$$\frac{a_1+\cdots + a_n}{n}=a\\ \frac{b_1+\cdots + b_m}{m}=b$$</p> <p>or, in other words, $$a_1+\cdots + a_n = n\cdot a\\b_1+\cdots + b_m = m\cdot b$$</p> <p>and you are looking for the number $\frac{a_1+\cdots + a_n+ b_1 + \cdots + b_m}{m+n}$</p> <p>Now this simplifies to $$\frac{a_1+\cdots + a_n+ b_1 + \cdots + b_m}{m+n} = \frac{n\cdot a + m\cdot b}{m+n}=\frac{n}{m+n}a + \frac{m}{m+n} b$$</p> <p>which is a weighted average of $a$ (the first average) and $b$ (the second average). 
The weights are determined by the size of each group.</p>
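As a quick numerical sanity check of the pooled-mean reasoning above, here is a minimal Python sketch. This is an editorial addition, not part of the original thread; the group sizes (835 and 4579) and means (4 and 3.5) are the ones quoted in the answers.

```python
# Weighted average of two group means vs. the pooled mean over all participants.
n1, m1 = 835, 4.0     # size and mean of group 1 (values from the thread)
n2, m2 = 4579, 3.5    # size and mean of group 2

weighted = m1 * n1 / (n1 + n2) + m2 * n2 / (n1 + n2)
pooled = (m1 * n1 + m2 * n2) / (n1 + n2)  # total score / total headcount

print(weighted)  # about 3.577
assert abs(weighted - pooled) < 1e-12  # the two expressions are algebraically equal
```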
Taylor expansion using Stirling's approximation
2,091,583
7
https://math.stackexchange.com/questions/2091583/taylor-expansion-using-stirlings-approximation
<p>This is from a statistical physics problem, but it is the mathematics behind it that I am stuck on here:</p> <blockquote> <p>Consider a large number $N$ of distinguishable particles distributed among $M$ boxes.</p> <p>We know that the total number of possible microstates is $$\Omega=M^N$$ and that the number of microstates with a distribution among the boxes given by the configuration $[n_1, n_2, ..., n_M]$ is given by $$\frac{N!}{\prod_{j=1}^M (n_j)!}\tag{1}$$ In the most likely configuration there are $$n_0 = \frac{N}{M}$$ particles in each box. Let $\Omega_0$ denote the statistical weight of this configuration and $p_0$ its probability. Now consider moving $\delta n$ particles from box $1$ to box $2$, giving the new configuration $$[n_0 - \delta n, n_0 + \delta n, n_0, n_0, ..., n_0]$$ Show that $$\fbox{$\color{blue}{\delta\left(\ln\Omega_{\{n\}}\right)\approx -\delta n\ln (n_0)-\frac{(\delta n)^2}{2 n_0}}$}$$ (Hint: you should use Stirling’s approximation here)</p> </blockquote> <p>The image below shows the situation: <a href="https://i.stack.imgur.com/ZaAs5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZaAs5.png" alt="Two boxes"></a></p> <p>Here is the answer as written by the author: <a href="https://i.stack.imgur.com/6BHbu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6BHbu.png" alt="Two boxes"></a></p> <p>This might be a bit hard to read, so I have typed it out <strong>word for word</strong> below:</p> <blockquote> <p>$$\ln\Omega_{\{n\}}=\ln(N!)-\sum_j\ln (n_j!)$$ The change in $\ln\Omega$ due to a change $\delta n$ in ONE box is $$\delta(\ln\Omega_{\{n\}})=-\ln([n_0+\delta_n]!)+\ln(n_0!)$$ Taylor expand $\ln (n!)$ $\color{red}{\text{(To second 2nd order since we are already at a maximum for}}$ $\color{red}{\ln\Omega_{\{n\}}}$$\color{red}{)}$ using Stirling: $$\ln (n!)\approx \underbrace{n_0\ln (n_0)-n_0}_{0th}+\underbrace{\ln (n_0)\cdot\delta_n}_{1st}+\underbrace{\frac{(\delta n)^2}{2n_0}}_{2nd}+\cdots$$ $$\implies \delta(\ln\Omega_{\{n\}})=-\ln (n_0)\cdot\delta n-\frac{(\delta n)^2}{2n_0}$$</p> </blockquote> <p>The red bracket can be ignored, as it was just used in the previous part of the question, where we had to show that $$n_0=\frac{N}{M}$$ is the most likely configuration, which we did by differentiating and setting to zero to maximize.</p> <hr> <p>Finally, I get to my question:</p> <p>I understand everything up to the point where it says "Taylor expand $\ln n!$". Firstly, where on earth is $n$ even defined? I see $n_0$ and $N$ but <strong>not</strong> $n$.</p> <p>I can only assume that the author meant 'Taylor expand $\ln([n_0+\delta_n]!)$', since I fail to see how that expression could ever get the factors $n_0$ and $\delta_n$.</p> <p>I know that the general Taylor expansion formula is given by $$f(a)+\frac {f'(a)}{1!} (x-a)+ \frac{f''(a)}{2!} (x-a)^2+\frac{f^{(3)}(a)}{3!}(x-a)^3+ \cdots$$</p> <p>and that Stirling's approximation is as follows: $$\ln(k!)\approx k\ln k - k +\frac12\ln k$$</p> <p>I have used both these formulae, but not <em>both</em> together.</p> <p>I'm very confused about how to proceed with this, so I naively apply Stirling's approximation first:</p> <p>$$\ln([n+\delta n]!)\approx (n+\delta n)\ln(n+\delta n)-(n+\delta n)+\frac12\ln(n+\delta n)$$</p> <p>But now I am completely stuck. Could someone please explain how the author was able to reach the result?</p> <hr> <h2>EDIT:</h2> <p>A comment below raised an important point. 
So I will just mention that for the purposes of this question the statistical weight $\Omega$ is equal to the <strong>number of microstates</strong>.</p> <hr> <h2>EDIT#2:</h2> <p>I must apologize: there was a contradiction in my question due to a typo.</p> <p>What I needed to show was that $$\fbox{$\color{blue}{\delta\left(\ln\Omega_{\{n\}}\right)\approx -\delta n\ln (n_0)-\frac{(\delta n)^2}{2 n_0}}$}$$ and this is given in blue in the first quotation box; very sorry about this.</p> <hr> <h2>EDIT#3:</h2> <p>I acknowledge that those of you who answered actually figured out that there must be a similar expression when considering both boxes. So, as promised, here is part 2 of the question as written by the author: <a href="https://i.stack.imgur.com/Klrm1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Klrm1.png" alt="part 2 of the question"></a></p> <p>I'll type it <strong>word for word</strong> again anyway, just in case it's hard to read:</p> <blockquote> <p>For a change to TWO boxes, moving $\delta n$ from $1$ to $2$, $$\delta\left(\ln\Omega_{\{n\}}\right)=-\ln (n_0)\cdot(-\delta n)-\frac{\left(-\delta n\right)^2}{2n_0}$$ $$\qquad\qquad\qquad\qquad\qquad-\ln (n_0)\cdot(+\delta n)-\frac{\left(+\delta n\right)^2}{2n_0}=-\frac{\delta n^2}{n_0}$$ $$\color{red}{\Huge{\star}}\quad\ln\Omega_{\{n\}}=\ln\Omega_0-\frac{\delta n^2}{n_0}$$</p> </blockquote> <p>Many thanks.</p>
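Before turning to the answers, the boxed claim is easy to probe numerically. The sketch below is an editorial addition (not from the original post): it compares the exact one-box change $-\ln([n_0+\delta n]!)+\ln(n_0!)$ with the approximation $-\delta n\ln(n_0)-(\delta n)^2/(2n_0)$, using Python's `math.lgamma(k+1)` for $\ln(k!)$. The values $n_0=10^4$ and $\delta n=50$ are assumptions chosen so that $\delta n\ll n_0$.

```python
import math

# Exact vs. approximate change in ln(Omega) when ONE box goes from n0 to n0 + dn.
n0, dn = 10_000, 50  # assumed sample values with dn << n0

exact = -(math.lgamma(n0 + dn + 1) - math.lgamma(n0 + 1))  # -ln((n0+dn)!) + ln(n0!)
approx = -dn * math.log(n0) - dn**2 / (2 * n0)             # the boxed formula

print(exact, approx)  # both about -460.64, matching to roughly five significant figures
```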
[ { "answer_id": 2097974, "score": 4, "text": "<p>Let us define the following two configurations:</p>\n\n<ul>\n<li>An \"optimal\" configuration: $\\mathbf n_1 := [n_0, \\cdots, n_0]$ .</li>\n<li>The perturbed configuration: $\\mathbf n_2 := [n_0 - \\delta n, n_0 + \\delta n, n_0, \\cdots, n_0]$ .</li>\n</ul>\n\n<p>Statistical weights of the two configurations are:\n$$ \\Omega_0 := \\Omega_{\\{\\mathbf n_1\\}} = \\frac{N!}{(n_0!)^M}\\,, \\qquad \\Omega_{\\{\\mathbf n_2\\}} = \\frac{N!}{(n_0-\\delta n)!(n_0 + \\delta n)! (n_0!)^{M-2}} \\,. $$\nTaking log of these two weights we get:\n\\begin{align} \\ln \\Omega_0 =&amp;\\; \\ln N! - M \\ln n_0!\\,, \\\\\n\\qquad \\ln \\Omega_{\\{\\mathbf n_2\\}} =&amp;\\; \\ln N! - \\ln (n_0-\\delta n)! - \\ln(n_0+\\delta n)! - (M-2) \\ln n_0!\\,. \\end{align}\nLog of the number of microstates is proportional to <em>entropy</em>, so let's call these two quantities the entropies of the two configurations (ignoring the proportionality constant, which is called the <em>Boltzmann constant</em>). Change in entropy due to perturbation is:\n$$\\ln \\Omega_{\\{\\mathbf n_2\\}} - \\ln \\Omega_0 = - \\ln(n_0-\\delta n)! - \\ln(n_0 + \\delta n)! + 2\\ln n_0!\\,. \\tag{1}$$</p>\n\n<p>Assuming $n_0$ and $n_0-\\delta n$ are both fairly large we use <em>Stirling's approximation</em>, nameley $\\ln m! \\approx m \\ln m - m$ for large $m$, to approximate (1):\n$$ \\ln \\Omega_{\\{\\mathbf n_2\\}} - \\ln \\Omega_0 \\approx -(n_0 - \\delta n) \\ln (n_0 - \\delta n)-(n_0 + \\delta n) \\ln (n_0 + \\delta n) + 2 n_0 \\ln n_0\\,. \\tag{2}$$\nNow we use <em>Taylor expansion</em> to expand $\\ln(n_0 \\pm \\delta n)$ by treating $\\delta n$ as small compared to $n_0$. The expansion of $\\ln(x+\\epsilon)$ around $x$ is given by:\n$$ \\ln(x + \\epsilon) = \\ln x + \\frac{\\epsilon}{x} - \\frac{\\epsilon^2}{2x^2} + \\mathcal O\\left(\\left(\\frac{\\epsilon}{x}\\right)^3\\right)\\,.$$\nUsing this in (2), and keeping terms upto quadratic order in $\\delta n$ we find:\n$$ \\ln \\Omega_{\\{\\mathbf n_2\\}} - \\ln \\Omega_0 \\approx -\\frac{(\\delta n)^2}{n_0}\\,.$$\nSo the entropy of the perturbed system is given by:\n$$ \\ln \\Omega_{\\{\\mathbf n_2\\}} \\approx \\ln\\Omega_0 -\\frac{(\\delta n)^2}{n_0} \\,. 
\\tag{3}$$</p>\n\n<p>The reason why there is no term linear in $\\delta n$ in (3) is very physical: an optimal configuration has the highest entropy, therefore the change in entropy as you perturb away from an optimal configuration must begin at least at quadratic order.</p>\n" }, { "answer_id": 2098006, "score": 4, "text": "<p>We assume the Stirling approximation (first order) $$\\log(n!)\\approx n \\log(n) - n = g(n) \\tag{1}$$</p>\n\n<p>And the Taylor expansion:</p>\n\n<p>$$g(x)\\approx g(x_0)+g'(x_0)(x-x_0)+g''(x_0)\\frac{(x-x_0)^2}{2}=\\\\\n=x_0\\log(x_0)-x_0 + \\log(x_0)\\delta+\\frac{\\delta^2}{2 x_0} \\tag{2}$$</p>\n\n<p>where $\\delta = x-x_0$ (the increment).</p>\n\n<p>Then the change in the box that <em>decrements</em> its count (box 1) by $\\delta$ is</p>\n\n<p>$$\\Delta_1(\\ln\\Omega_{\\{n\\}})=-\\ln((n_0-\\delta)!)+\\ln(n_0!)=\\\\= \\log(n_0)\\delta-\\frac{\\delta^2}{2 n_0} \\tag{3}$$</p>\n\n<p>The change in the other box, which <em>increments</em> its count (box 2) by $\\delta$, is</p>\n\n<p>$$\\Delta_2(\\ln\\Omega_{\\{n\\}})=-\\ln((n_0+\\delta)!)+\\ln(n_0!)=\\\\ =-\\log(n_0)\\delta-\\frac{\\delta^2}{2 n_0} \\tag{4}$$</p>\n\n<p>The total change is then $$\\Delta(\\ln\\Omega_{\\{n\\}})= \\Delta_1(\\ln\\Omega_{\\{n\\}})+\\Delta_2(\\ln\\Omega_{\\{n\\}})=-\\frac{\\delta^2}{n_0} \\tag{5}$$ </p>\n\n<p>(That's why we need to take a second-order Taylor expansion: because the linear terms cancel - not because \"we are at a local maximum\" - in fact the result is valid for any $n_0$, not only for $n_0=N/M$.)</p>\n\n<p>The original statement \n$$\\delta\\left(\\ln\\Omega_{\\{n\\}}\\right)\\approx \\ln\\Omega_0-\\frac{\\delta n^2}{2 n_0}$$\n seems to have two errors. First, it mixes the value of the statistical weight with its change: if we are speaking of the change, then the first \"$\\delta$\" is right, but the initial term $\\ln\\Omega_0$ is wrong (sanity check: if $\\delta n=0$ the change should be zero). Furthermore, the result seems to be missing a factor of two (indeed, the copied derivation emphasizes that the first variation is due to <em>one box</em>, but then it seems to forget about the other one).</p>\n\n<p>Edit: Regarding your edit: your \"equation in blue\" is only correct if it's understood to be the change in one box (see eq. $3$). If it's understood (as it seems) as the total change, then it's wrong. The net change is given by eq. $(5)$.</p>\n" } ]
\\tag{3}$$</p>\n\n<p>The reason why in (3) there is no term linear in $\\delta n$ is very physical, an optimal configuration has the highest entropy, therefore change in entropy as you perturb from an optimal configuration must begin at least at the quadratic order.</p>\n" }, { "answer_id": 2098006, "score": 4, "text": "<p>We assume the Stirling approximation (first order) $$\\log(n!)\\approx n \\log(n) - n = g(n) \\tag{1}$$</p>\n\n<p>And the Taylor expansion:</p>\n\n<p>$$g(x)\\approx g(x_0)+g'(x_0)(x-x_0)+g''(x_0)\\frac{(x-x_0)^2}{2}=\\\\\n=x_0\\log(x_0)-x_0 + \\log(x_0)\\delta+\\frac{\\delta^2}{2 x_0} \\tag{2}$$</p>\n\n<p>where $\\delta = x-x_0$ (increment).</p>\n\n<p>Then the change in the box that <em>decrements</em> its counting (box 1) by $\\delta$ is</p>\n\n<p>$$\\Delta_1(\\ln\\Omega_{\\{n\\}})=-\\ln((n_0-\\delta)!)+\\ln(n_0!)=\\\\= \\log(n_0)\\delta-\\frac{\\delta^2}{2 n_0} \\tag{3}$$</p>\n\n<p>The change in the other box that <em>increments</em> its counting (box 2) by $\\delta$ is</p>\n\n<p>$$\\Delta_2(\\ln\\Omega_{\\{n\\}})=-\\ln((n_0+\\delta)!)+\\ln(n_0!)=\\\\ =-\\log(n_0)\\delta-\\frac{\\delta^2}{2 n_0} \\tag{4}$$</p>\n\n<p>The total change is then $$\\Delta(\\ln\\Omega_{\\{n\\}})= \\Delta_1(\\ln\\Omega_{\\{n\\}})+\\Delta_2(\\ln\\Omega_{\\{n\\}})=-\\frac{\\delta^2}{n_0} \\tag{5}$$ </p>\n\n<p>(that's why we need to take a second-order Taylor expansion: because the linear terms cancel - not because \"we are at a local maximum\" - actually the result is valid for any $n_0$, not only for $n_0=N/M$)</p>\n\n<p>The original statement \n$$\\delta\\left(\\ln\\Omega_{\\{n\\}}\\right)\\approx \\ln\\Omega_0-\\frac{\\delta n^2}{2 n_0}$$\n seems to have two errors: First, if mixes the value of the statistical weight with its change. If we are speaking of the change, then the first \"$\\delta$\" is right, but the initial term $\\ln\\Omega_0$ is wrong - sanity check: if $\\delta n=0$ the change should be zero). Furthermore, the result seems to have a missing two factor (indeed, the copied derivation emphasizes that the first variation is due to <em>one box</em>, but then it seems ot forget about the other one).</p>\n\n<p>Edit: Regarding your edit: your \"equation in blue\" is only correct if it's understood to be the change in one box (see eq $3$). If it's understood (as it seems) as the total change, then it's wrong. The net change is given by eq. $(5)$</p>\n" }, { "answer_id": 2097974, "score": 4, "text": "<p>Let us define the following two configurations:</p>\n\n<ul>\n<li>An \"optimal\" configuration: $\\mathbf n_1 := [n_0, \\cdots, n_0]$ .</li>\n<li>The perturbed configuration: $\\mathbf n_2 := [n_0 - \\delta n, n_0 + \\delta n, n_0, \\cdots, n_0]$ .</li>\n</ul>\n\n<p>Statistical weights of the two configurations are:\n$$ \\Omega_0 := \\Omega_{\\{\\mathbf n_1\\}} = \\frac{N!}{(n_0!)^M}\\,, \\qquad \\Omega_{\\{\\mathbf n_2\\}} = \\frac{N!}{(n_0-\\delta n)!(n_0 + \\delta n)! (n_0!)^{M-2}} \\,. $$\nTaking log of these two weights we get:\n\\begin{align} \\ln \\Omega_0 =&amp;\\; \\ln N! - M \\ln n_0!\\,, \\\\\n\\qquad \\ln \\Omega_{\\{\\mathbf n_2\\}} =&amp;\\; \\ln N! - \\ln (n_0-\\delta n)! - \\ln(n_0+\\delta n)! - (M-2) \\ln n_0!\\,. \\end{align}\nLog of the number of microstates is proportional to <em>entropy</em>, so let's call these two quantities the entropies of the two configurations (ignoring the proportionality constant, which is called the <em>Boltzmann constant</em>). 
Change in entropy due to perturbation is:\n$$\\ln \\Omega_{\\{\\mathbf n_2\\}} - \\ln \\Omega_0 = - \\ln(n_0-\\delta n)! - \\ln(n_0 + \\delta n)! + 2\\ln n_0!\\,. \\tag{1}$$</p>\n\n<p>Assuming $n_0$ and $n_0-\\delta n$ are both fairly large we use <em>Stirling's approximation</em>, nameley $\\ln m! \\approx m \\ln m - m$ for large $m$, to approximate (1):\n$$ \\ln \\Omega_{\\{\\mathbf n_2\\}} - \\ln \\Omega_0 \\approx -(n_0 - \\delta n) \\ln (n_0 - \\delta n)-(n_0 + \\delta n) \\ln (n_0 + \\delta n) + 2 n_0 \\ln n_0\\,. \\tag{2}$$\nNow we use <em>Taylor expansion</em> to expand $\\ln(n_0 \\pm \\delta n)$ by treating $\\delta n$ as small compared to $n_0$. The expansion of $\\ln(x+\\epsilon)$ around $x$ is given by:\n$$ \\ln(x + \\epsilon) = \\ln x + \\frac{\\epsilon}{x} - \\frac{\\epsilon^2}{2x^2} + \\mathcal O\\left(\\left(\\frac{\\epsilon}{x}\\right)^3\\right)\\,.$$\nUsing this in (2), and keeping terms upto quadratic order in $\\delta n$ we find:\n$$ \\ln \\Omega_{\\{\\mathbf n_2\\}} - \\ln \\Omega_0 \\approx -\\frac{(\\delta n)^2}{n_0}\\,.$$\nSo the entropy of the perturbed system is given by:\n$$ \\ln \\Omega_{\\{\\mathbf n_2\\}} \\approx \\ln\\Omega_0 -\\frac{(\\delta n)^2}{n_0} \\,. \\tag{3}$$</p>\n\n<p>The reason why in (3) there is no term linear in $\\delta n$ is very physical, an optimal configuration has the highest entropy, therefore change in entropy as you perturb from an optimal configuration must begin at least at the quadratic order.</p>\n" }, { "answer_id": 2098006, "score": 4, "text": "<p>We assume the Stirling approximation (first order) $$\\log(n!)\\approx n \\log(n) - n = g(n) \\tag{1}$$</p>\n\n<p>And the Taylor expansion:</p>\n\n<p>$$g(x)\\approx g(x_0)+g'(x_0)(x-x_0)+g''(x_0)\\frac{(x-x_0)^2}{2}=\\\\\n=x_0\\log(x_0)-x_0 + \\log(x_0)\\delta+\\frac{\\delta^2}{2 x_0} \\tag{2}$$</p>\n\n<p>where $\\delta = x-x_0$ (increment).</p>\n\n<p>Then the change in the box that <em>decrements</em> its counting (box 1) by $\\delta$ is</p>\n\n<p>$$\\Delta_1(\\ln\\Omega_{\\{n\\}})=-\\ln((n_0-\\delta)!)+\\ln(n_0!)=\\\\= \\log(n_0)\\delta-\\frac{\\delta^2}{2 n_0} \\tag{3}$$</p>\n\n<p>The change in the other box that <em>increments</em> its counting (box 2) by $\\delta$ is</p>\n\n<p>$$\\Delta_2(\\ln\\Omega_{\\{n\\}})=-\\ln((n_0+\\delta)!)+\\ln(n_0!)=\\\\ =-\\log(n_0)\\delta-\\frac{\\delta^2}{2 n_0} \\tag{4}$$</p>\n\n<p>The total change is then $$\\Delta(\\ln\\Omega_{\\{n\\}})= \\Delta_1(\\ln\\Omega_{\\{n\\}})+\\Delta_2(\\ln\\Omega_{\\{n\\}})=-\\frac{\\delta^2}{n_0} \\tag{5}$$ </p>\n\n<p>(that's why we need to take a second-order Taylor expansion: because the linear terms cancel - not because \"we are at a local maximum\" - actually the result is valid for any $n_0$, not only for $n_0=N/M$)</p>\n\n<p>The original statement \n$$\\delta\\left(\\ln\\Omega_{\\{n\\}}\\right)\\approx \\ln\\Omega_0-\\frac{\\delta n^2}{2 n_0}$$\n seems to have two errors: First, if mixes the value of the statistical weight with its change. If we are speaking of the change, then the first \"$\\delta$\" is right, but the initial term $\\ln\\Omega_0$ is wrong - sanity check: if $\\delta n=0$ the change should be zero). Furthermore, the result seems to have a missing two factor (indeed, the copied derivation emphasizes that the first variation is due to <em>one box</em>, but then it seems ot forget about the other one).</p>\n\n<p>Edit: Regarding your edit: your \"equation in blue\" is only correct if it's understood to be the change in one box (see eq $3$). 
If it's understood (as it seems) as the total change, then it's wrong. The net change is given by eq. $(5)$</p>\n" } ]
Question: Taylor expansion using Stirling's approximation Answer 1: <p>Let us define the following two configurations:</p> <ul> <li>An "optimal" configuration: $\mathbf n_1 := [n_0, \cdots, n_0]$.</li> <li>The perturbed configuration: $\mathbf n_2 := [n_0 - \delta n, n_0 + \delta n, n_0, \cdots, n_0]$.</li> </ul> <p>Statistical weights of the two configurations are: $$ \Omega_0 := \Omega_{\{\mathbf n_1\}} = \frac{N!}{(n_0!)^M}\,, \qquad \Omega_{\{\mathbf n_2\}} = \frac{N!}{(n_0-\delta n)!(n_0 + \delta n)! (n_0!)^{M-2}} \,. $$ Taking log of these two weights we get: \begin{align} \ln \Omega_0 =&amp;\; \ln N! - M \ln n_0!\,, \\ \qquad \ln \Omega_{\{\mathbf n_2\}} =&amp;\; \ln N! - \ln (n_0-\delta n)! - \ln(n_0+\delta n)! - (M-2) \ln n_0!\,. \end{align} Log of the number of microstates is proportional to <em>entropy</em>, so let's call these two quantities the entropies of the two configurations (ignoring the proportionality constant, which is called the <em>Boltzmann constant</em>). The change in entropy due to the perturbation is: $$\ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 = - \ln(n_0-\delta n)! - \ln(n_0 + \delta n)! + 2\ln n_0!\,. \tag{1}$$</p> <p>Assuming $n_0$ and $n_0-\delta n$ are both fairly large, we use <em>Stirling's approximation</em>, namely $\ln m! \approx m \ln m - m$ for large $m$, to approximate (1): $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -(n_0 - \delta n) \ln (n_0 - \delta n)-(n_0 + \delta n) \ln (n_0 + \delta n) + 2 n_0 \ln n_0\,. \tag{2}$$ Now we use <em>Taylor expansion</em> to expand $\ln(n_0 \pm \delta n)$ by treating $\delta n$ as small compared to $n_0$. The expansion of $\ln(x+\epsilon)$ around $x$ is given by: $$ \ln(x + \epsilon) = \ln x + \frac{\epsilon}{x} - \frac{\epsilon^2}{2x^2} + \mathcal O\left(\left(\frac{\epsilon}{x}\right)^3\right)\,.$$ Using this in (2), and keeping terms up to quadratic order in $\delta n$, we find: $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -\frac{(\delta n)^2}{n_0}\,.$$ So the entropy of the perturbed system is given by: $$ \ln \Omega_{\{\mathbf n_2\}} \approx \ln\Omega_0 -\frac{(\delta n)^2}{n_0} \,. 
\tag{3}$$</p> <p>The reason why in (3) there is no term linear in $\delta n$ is very physical: an optimal configuration has the highest entropy, therefore the change in entropy as you perturb away from an optimal configuration must begin at least at quadratic order.</p>
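As a quick numerical sanity check of eq. (3) (an editorial sketch, not part of either original answer), the snippet below compares the exact log-weight difference from eq. (1), computed with log-factorials, against the quadratic approximation $-(\delta n)^2/n_0$; the values of $n_0$ and $\delta n$ are illustrative assumptions.

```python
import math

# Sanity check of eq. (3): ln(Omega_2) - ln(Omega_0) ~= -(dn)^2 / n0.
# n0 and dn are illustrative values, not taken from the answers.
n0, dn = 10_000, 50

# Exact difference from eq. (1), using ln m! = lgamma(m + 1); the common
# N! and (n0!)^(M-2) factors cancel between the two configurations.
exact = 2 * math.lgamma(n0 + 1) - math.lgamma(n0 - dn + 1) - math.lgamma(n0 + dn + 1)
approx = -dn**2 / n0

print(f"exact = {exact:.6f}, quadratic approximation = {approx:.6f}")
```

Both numbers come out near $-0.25$ here, and the agreement improves as $\delta n / n_0 \to 0$, consistent with the higher-order terms dropped in the expansion; eq. (5) of the following answer reaches the same leading-order value by summing the two per-box changes.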
\approx m \ln m - m$ for large $m$, to approximate (1): $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -(n_0 - \delta n) \ln (n_0 - \delta n)-(n_0 + \delta n) \ln (n_0 + \delta n) + 2 n_0 \ln n_0\,. \tag{2}$$ Now we use <em>Taylor expansion</em> to expand $\ln(n_0 \pm \delta n)$ by treating $\delta n$ as small compared to $n_0$. The expansion of $\ln(x+\epsilon)$ around $x$ is given by: $$ \ln(x + \epsilon) = \ln x + \frac{\epsilon}{x} - \frac{\epsilon^2}{2x^2} + \mathcal O\left(\left(\frac{\epsilon}{x}\right)^3\right)\,.$$ Using this in (2), and keeping terms upto quadratic order in $\delta n$ we find: $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -\frac{(\delta n)^2}{n_0}\,.$$ So the entropy of the perturbed system is given by: $$ \ln \Omega_{\{\mathbf n_2\}} \approx \ln\Omega_0 -\frac{(\delta n)^2}{n_0} \,. \tag{3}$$</p> <p>The reason why in (3) there is no term linear in $\delta n$ is very physical, an optimal configuration has the highest entropy, therefore change in entropy as you perturb from an optimal configuration must begin at least at the quadratic order.</p> Answer 4: <p>We assume the Stirling approximation (first order) $$\log(n!)\approx n \log(n) - n = g(n) \tag{1}$$</p> <p>And the Taylor expansion:</p> <p>$$g(x)\approx g(x_0)+g'(x_0)(x-x_0)+g''(x_0)\frac{(x-x_0)^2}{2}=\\ =x_0\log(x_0)-x_0 + \log(x_0)\delta+\frac{\delta^2}{2 x_0} \tag{2}$$</p> <p>where $\delta = x-x_0$ (increment).</p> <p>Then the change in the box that <em>decrements</em> its counting (box 1) by $\delta$ is</p> <p>$$\Delta_1(\ln\Omega_{\{n\}})=-\ln((n_0-\delta)!)+\ln(n_0!)=\\= \log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{3}$$</p> <p>The change in the other box that <em>increments</em> its counting (box 2) by $\delta$ is</p> <p>$$\Delta_2(\ln\Omega_{\{n\}})=-\ln((n_0+\delta)!)+\ln(n_0!)=\\ =-\log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{4}$$</p> <p>The total change is then $$\Delta(\ln\Omega_{\{n\}})= \Delta_1(\ln\Omega_{\{n\}})+\Delta_2(\ln\Omega_{\{n\}})=-\frac{\delta^2}{n_0} \tag{5}$$ </p> <p>(that's why we need to take a second-order Taylor expansion: because the linear terms cancel - not because "we are at a local maximum" - actually the result is valid for any $n_0$, not only for $n_0=N/M$)</p> <p>The original statement $$\delta\left(\ln\Omega_{\{n\}}\right)\approx \ln\Omega_0-\frac{\delta n^2}{2 n_0}$$ seems to have two errors: First, if mixes the value of the statistical weight with its change. If we are speaking of the change, then the first "$\delta$" is right, but the initial term $\ln\Omega_0$ is wrong - sanity check: if $\delta n=0$ the change should be zero). Furthermore, the result seems to have a missing two factor (indeed, the copied derivation emphasizes that the first variation is due to <em>one box</em>, but then it seems ot forget about the other one).</p> <p>Edit: Regarding your edit: your "equation in blue" is only correct if it's understood to be the change in one box (see eq $3$). If it's understood (as it seems) as the total change, then it's wrong. The net change is given by eq. $(5)$</p> Answer 5: <p>Let us define the following two configurations:</p> <ul> <li>An "optimal" configuration: $\mathbf n_1 := [n_0, \cdots, n_0]$ .</li> <li>The perturbed configuration: $\mathbf n_2 := [n_0 - \delta n, n_0 + \delta n, n_0, \cdots, n_0]$ .</li> </ul> <p>Statistical weights of the two configurations are: $$ \Omega_0 := \Omega_{\{\mathbf n_1\}} = \frac{N!}{(n_0!)^M}\,, \qquad \Omega_{\{\mathbf n_2\}} = \frac{N!}{(n_0-\delta n)!(n_0 + \delta n)! (n_0!)^{M-2}} \,. 
$$ Taking log of these two weights we get: \begin{align} \ln \Omega_0 =&amp;\; \ln N! - M \ln n_0!\,, \\ \qquad \ln \Omega_{\{\mathbf n_2\}} =&amp;\; \ln N! - \ln (n_0-\delta n)! - \ln(n_0+\delta n)! - (M-2) \ln n_0!\,. \end{align} Log of the number of microstates is proportional to <em>entropy</em>, so let's call these two quantities the entropies of the two configurations (ignoring the proportionality constant, which is called the <em>Boltzmann constant</em>). Change in entropy due to perturbation is: $$\ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 = - \ln(n_0-\delta n)! - \ln(n_0 + \delta n)! + 2\ln n_0!\,. \tag{1}$$</p> <p>Assuming $n_0$ and $n_0-\delta n$ are both fairly large we use <em>Stirling's approximation</em>, nameley $\ln m! \approx m \ln m - m$ for large $m$, to approximate (1): $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -(n_0 - \delta n) \ln (n_0 - \delta n)-(n_0 + \delta n) \ln (n_0 + \delta n) + 2 n_0 \ln n_0\,. \tag{2}$$ Now we use <em>Taylor expansion</em> to expand $\ln(n_0 \pm \delta n)$ by treating $\delta n$ as small compared to $n_0$. The expansion of $\ln(x+\epsilon)$ around $x$ is given by: $$ \ln(x + \epsilon) = \ln x + \frac{\epsilon}{x} - \frac{\epsilon^2}{2x^2} + \mathcal O\left(\left(\frac{\epsilon}{x}\right)^3\right)\,.$$ Using this in (2), and keeping terms upto quadratic order in $\delta n$ we find: $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -\frac{(\delta n)^2}{n_0}\,.$$ So the entropy of the perturbed system is given by: $$ \ln \Omega_{\{\mathbf n_2\}} \approx \ln\Omega_0 -\frac{(\delta n)^2}{n_0} \,. \tag{3}$$</p> <p>The reason why in (3) there is no term linear in $\delta n$ is very physical, an optimal configuration has the highest entropy, therefore change in entropy as you perturb from an optimal configuration must begin at least at the quadratic order.</p> Answer 6: <p>We assume the Stirling approximation (first order) $$\log(n!)\approx n \log(n) - n = g(n) \tag{1}$$</p> <p>And the Taylor expansion:</p> <p>$$g(x)\approx g(x_0)+g'(x_0)(x-x_0)+g''(x_0)\frac{(x-x_0)^2}{2}=\\ =x_0\log(x_0)-x_0 + \log(x_0)\delta+\frac{\delta^2}{2 x_0} \tag{2}$$</p> <p>where $\delta = x-x_0$ (increment).</p> <p>Then the change in the box that <em>decrements</em> its counting (box 1) by $\delta$ is</p> <p>$$\Delta_1(\ln\Omega_{\{n\}})=-\ln((n_0-\delta)!)+\ln(n_0!)=\\= \log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{3}$$</p> <p>The change in the other box that <em>increments</em> its counting (box 2) by $\delta$ is</p> <p>$$\Delta_2(\ln\Omega_{\{n\}})=-\ln((n_0+\delta)!)+\ln(n_0!)=\\ =-\log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{4}$$</p> <p>The total change is then $$\Delta(\ln\Omega_{\{n\}})= \Delta_1(\ln\Omega_{\{n\}})+\Delta_2(\ln\Omega_{\{n\}})=-\frac{\delta^2}{n_0} \tag{5}$$ </p> <p>(that's why we need to take a second-order Taylor expansion: because the linear terms cancel - not because "we are at a local maximum" - actually the result is valid for any $n_0$, not only for $n_0=N/M$)</p> <p>The original statement $$\delta\left(\ln\Omega_{\{n\}}\right)\approx \ln\Omega_0-\frac{\delta n^2}{2 n_0}$$ seems to have two errors: First, if mixes the value of the statistical weight with its change. If we are speaking of the change, then the first "$\delta$" is right, but the initial term $\ln\Omega_0$ is wrong - sanity check: if $\delta n=0$ the change should be zero). 
Furthermore, the result seems to have a missing two factor (indeed, the copied derivation emphasizes that the first variation is due to <em>one box</em>, but then it seems ot forget about the other one).</p> <p>Edit: Regarding your edit: your "equation in blue" is only correct if it's understood to be the change in one box (see eq $3$). If it's understood (as it seems) as the total change, then it's wrong. The net change is given by eq. $(5)$</p> Answer 7: <p>Let us define the following two configurations:</p> <ul> <li>An "optimal" configuration: $\mathbf n_1 := [n_0, \cdots, n_0]$ .</li> <li>The perturbed configuration: $\mathbf n_2 := [n_0 - \delta n, n_0 + \delta n, n_0, \cdots, n_0]$ .</li> </ul> <p>Statistical weights of the two configurations are: $$ \Omega_0 := \Omega_{\{\mathbf n_1\}} = \frac{N!}{(n_0!)^M}\,, \qquad \Omega_{\{\mathbf n_2\}} = \frac{N!}{(n_0-\delta n)!(n_0 + \delta n)! (n_0!)^{M-2}} \,. $$ Taking log of these two weights we get: \begin{align} \ln \Omega_0 =&amp;\; \ln N! - M \ln n_0!\,, \\ \qquad \ln \Omega_{\{\mathbf n_2\}} =&amp;\; \ln N! - \ln (n_0-\delta n)! - \ln(n_0+\delta n)! - (M-2) \ln n_0!\,. \end{align} Log of the number of microstates is proportional to <em>entropy</em>, so let's call these two quantities the entropies of the two configurations (ignoring the proportionality constant, which is called the <em>Boltzmann constant</em>). Change in entropy due to perturbation is: $$\ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 = - \ln(n_0-\delta n)! - \ln(n_0 + \delta n)! + 2\ln n_0!\,. \tag{1}$$</p> <p>Assuming $n_0$ and $n_0-\delta n$ are both fairly large we use <em>Stirling's approximation</em>, nameley $\ln m! \approx m \ln m - m$ for large $m$, to approximate (1): $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -(n_0 - \delta n) \ln (n_0 - \delta n)-(n_0 + \delta n) \ln (n_0 + \delta n) + 2 n_0 \ln n_0\,. \tag{2}$$ Now we use <em>Taylor expansion</em> to expand $\ln(n_0 \pm \delta n)$ by treating $\delta n$ as small compared to $n_0$. The expansion of $\ln(x+\epsilon)$ around $x$ is given by: $$ \ln(x + \epsilon) = \ln x + \frac{\epsilon}{x} - \frac{\epsilon^2}{2x^2} + \mathcal O\left(\left(\frac{\epsilon}{x}\right)^3\right)\,.$$ Using this in (2), and keeping terms upto quadratic order in $\delta n$ we find: $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -\frac{(\delta n)^2}{n_0}\,.$$ So the entropy of the perturbed system is given by: $$ \ln \Omega_{\{\mathbf n_2\}} \approx \ln\Omega_0 -\frac{(\delta n)^2}{n_0} \,. 
\tag{3}$$</p> <p>The reason why in (3) there is no term linear in $\delta n$ is very physical, an optimal configuration has the highest entropy, therefore change in entropy as you perturb from an optimal configuration must begin at least at the quadratic order.</p> Answer 8: <p>We assume the Stirling approximation (first order) $$\log(n!)\approx n \log(n) - n = g(n) \tag{1}$$</p> <p>And the Taylor expansion:</p> <p>$$g(x)\approx g(x_0)+g'(x_0)(x-x_0)+g''(x_0)\frac{(x-x_0)^2}{2}=\\ =x_0\log(x_0)-x_0 + \log(x_0)\delta+\frac{\delta^2}{2 x_0} \tag{2}$$</p> <p>where $\delta = x-x_0$ (increment).</p> <p>Then the change in the box that <em>decrements</em> its counting (box 1) by $\delta$ is</p> <p>$$\Delta_1(\ln\Omega_{\{n\}})=-\ln((n_0-\delta)!)+\ln(n_0!)=\\= \log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{3}$$</p> <p>The change in the other box that <em>increments</em> its counting (box 2) by $\delta$ is</p> <p>$$\Delta_2(\ln\Omega_{\{n\}})=-\ln((n_0+\delta)!)+\ln(n_0!)=\\ =-\log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{4}$$</p> <p>The total change is then $$\Delta(\ln\Omega_{\{n\}})= \Delta_1(\ln\Omega_{\{n\}})+\Delta_2(\ln\Omega_{\{n\}})=-\frac{\delta^2}{n_0} \tag{5}$$ </p> <p>(that's why we need to take a second-order Taylor expansion: because the linear terms cancel - not because "we are at a local maximum" - actually the result is valid for any $n_0$, not only for $n_0=N/M$)</p> <p>The original statement $$\delta\left(\ln\Omega_{\{n\}}\right)\approx \ln\Omega_0-\frac{\delta n^2}{2 n_0}$$ seems to have two errors: First, if mixes the value of the statistical weight with its change. If we are speaking of the change, then the first "$\delta$" is right, but the initial term $\ln\Omega_0$ is wrong - sanity check: if $\delta n=0$ the change should be zero). Furthermore, the result seems to have a missing two factor (indeed, the copied derivation emphasizes that the first variation is due to <em>one box</em>, but then it seems ot forget about the other one).</p> <p>Edit: Regarding your edit: your "equation in blue" is only correct if it's understood to be the change in one box (see eq $3$). If it's understood (as it seems) as the total change, then it's wrong. The net change is given by eq. $(5)$</p> Answer 9: <p>Let us define the following two configurations:</p> <ul> <li>An "optimal" configuration: $\mathbf n_1 := [n_0, \cdots, n_0]$ .</li> <li>The perturbed configuration: $\mathbf n_2 := [n_0 - \delta n, n_0 + \delta n, n_0, \cdots, n_0]$ .</li> </ul> <p>Statistical weights of the two configurations are: $$ \Omega_0 := \Omega_{\{\mathbf n_1\}} = \frac{N!}{(n_0!)^M}\,, \qquad \Omega_{\{\mathbf n_2\}} = \frac{N!}{(n_0-\delta n)!(n_0 + \delta n)! (n_0!)^{M-2}} \,. $$ Taking log of these two weights we get: \begin{align} \ln \Omega_0 =&amp;\; \ln N! - M \ln n_0!\,, \\ \qquad \ln \Omega_{\{\mathbf n_2\}} =&amp;\; \ln N! - \ln (n_0-\delta n)! - \ln(n_0+\delta n)! - (M-2) \ln n_0!\,. \end{align} Log of the number of microstates is proportional to <em>entropy</em>, so let's call these two quantities the entropies of the two configurations (ignoring the proportionality constant, which is called the <em>Boltzmann constant</em>). Change in entropy due to perturbation is: $$\ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 = - \ln(n_0-\delta n)! - \ln(n_0 + \delta n)! + 2\ln n_0!\,. \tag{1}$$</p> <p>Assuming $n_0$ and $n_0-\delta n$ are both fairly large we use <em>Stirling's approximation</em>, nameley $\ln m! 
\approx m \ln m - m$ for large $m$, to approximate (1): $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -(n_0 - \delta n) \ln (n_0 - \delta n)-(n_0 + \delta n) \ln (n_0 + \delta n) + 2 n_0 \ln n_0\,. \tag{2}$$ Now we use <em>Taylor expansion</em> to expand $\ln(n_0 \pm \delta n)$ by treating $\delta n$ as small compared to $n_0$. The expansion of $\ln(x+\epsilon)$ around $x$ is given by: $$ \ln(x + \epsilon) = \ln x + \frac{\epsilon}{x} - \frac{\epsilon^2}{2x^2} + \mathcal O\left(\left(\frac{\epsilon}{x}\right)^3\right)\,.$$ Using this in (2), and keeping terms upto quadratic order in $\delta n$ we find: $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -\frac{(\delta n)^2}{n_0}\,.$$ So the entropy of the perturbed system is given by: $$ \ln \Omega_{\{\mathbf n_2\}} \approx \ln\Omega_0 -\frac{(\delta n)^2}{n_0} \,. \tag{3}$$</p> <p>The reason why in (3) there is no term linear in $\delta n$ is very physical, an optimal configuration has the highest entropy, therefore change in entropy as you perturb from an optimal configuration must begin at least at the quadratic order.</p> Answer 10: <p>We assume the Stirling approximation (first order) $$\log(n!)\approx n \log(n) - n = g(n) \tag{1}$$</p> <p>And the Taylor expansion:</p> <p>$$g(x)\approx g(x_0)+g'(x_0)(x-x_0)+g''(x_0)\frac{(x-x_0)^2}{2}=\\ =x_0\log(x_0)-x_0 + \log(x_0)\delta+\frac{\delta^2}{2 x_0} \tag{2}$$</p> <p>where $\delta = x-x_0$ (increment).</p> <p>Then the change in the box that <em>decrements</em> its counting (box 1) by $\delta$ is</p> <p>$$\Delta_1(\ln\Omega_{\{n\}})=-\ln((n_0-\delta)!)+\ln(n_0!)=\\= \log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{3}$$</p> <p>The change in the other box that <em>increments</em> its counting (box 2) by $\delta$ is</p> <p>$$\Delta_2(\ln\Omega_{\{n\}})=-\ln((n_0+\delta)!)+\ln(n_0!)=\\ =-\log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{4}$$</p> <p>The total change is then $$\Delta(\ln\Omega_{\{n\}})= \Delta_1(\ln\Omega_{\{n\}})+\Delta_2(\ln\Omega_{\{n\}})=-\frac{\delta^2}{n_0} \tag{5}$$ </p> <p>(that's why we need to take a second-order Taylor expansion: because the linear terms cancel - not because "we are at a local maximum" - actually the result is valid for any $n_0$, not only for $n_0=N/M$)</p> <p>The original statement $$\delta\left(\ln\Omega_{\{n\}}\right)\approx \ln\Omega_0-\frac{\delta n^2}{2 n_0}$$ seems to have two errors: First, if mixes the value of the statistical weight with its change. If we are speaking of the change, then the first "$\delta$" is right, but the initial term $\ln\Omega_0$ is wrong - sanity check: if $\delta n=0$ the change should be zero). Furthermore, the result seems to have a missing two factor (indeed, the copied derivation emphasizes that the first variation is due to <em>one box</em>, but then it seems ot forget about the other one).</p> <p>Edit: Regarding your edit: your "equation in blue" is only correct if it's understood to be the change in one box (see eq $3$). If it's understood (as it seems) as the total change, then it's wrong. The net change is given by eq. $(5)$</p> Answer 11: <p>Let us define the following two configurations:</p> <ul> <li>An "optimal" configuration: $\mathbf n_1 := [n_0, \cdots, n_0]$ .</li> <li>The perturbed configuration: $\mathbf n_2 := [n_0 - \delta n, n_0 + \delta n, n_0, \cdots, n_0]$ .</li> </ul> <p>Statistical weights of the two configurations are: $$ \Omega_0 := \Omega_{\{\mathbf n_1\}} = \frac{N!}{(n_0!)^M}\,, \qquad \Omega_{\{\mathbf n_2\}} = \frac{N!}{(n_0-\delta n)!(n_0 + \delta n)! (n_0!)^{M-2}} \,. 
$$ Taking log of these two weights we get: \begin{align} \ln \Omega_0 =&amp;\; \ln N! - M \ln n_0!\,, \\ \qquad \ln \Omega_{\{\mathbf n_2\}} =&amp;\; \ln N! - \ln (n_0-\delta n)! - \ln(n_0+\delta n)! - (M-2) \ln n_0!\,. \end{align} Log of the number of microstates is proportional to <em>entropy</em>, so let's call these two quantities the entropies of the two configurations (ignoring the proportionality constant, which is called the <em>Boltzmann constant</em>). Change in entropy due to perturbation is: $$\ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 = - \ln(n_0-\delta n)! - \ln(n_0 + \delta n)! + 2\ln n_0!\,. \tag{1}$$</p> <p>Assuming $n_0$ and $n_0-\delta n$ are both fairly large we use <em>Stirling's approximation</em>, nameley $\ln m! \approx m \ln m - m$ for large $m$, to approximate (1): $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -(n_0 - \delta n) \ln (n_0 - \delta n)-(n_0 + \delta n) \ln (n_0 + \delta n) + 2 n_0 \ln n_0\,. \tag{2}$$ Now we use <em>Taylor expansion</em> to expand $\ln(n_0 \pm \delta n)$ by treating $\delta n$ as small compared to $n_0$. The expansion of $\ln(x+\epsilon)$ around $x$ is given by: $$ \ln(x + \epsilon) = \ln x + \frac{\epsilon}{x} - \frac{\epsilon^2}{2x^2} + \mathcal O\left(\left(\frac{\epsilon}{x}\right)^3\right)\,.$$ Using this in (2), and keeping terms upto quadratic order in $\delta n$ we find: $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -\frac{(\delta n)^2}{n_0}\,.$$ So the entropy of the perturbed system is given by: $$ \ln \Omega_{\{\mathbf n_2\}} \approx \ln\Omega_0 -\frac{(\delta n)^2}{n_0} \,. \tag{3}$$</p> <p>The reason why in (3) there is no term linear in $\delta n$ is very physical, an optimal configuration has the highest entropy, therefore change in entropy as you perturb from an optimal configuration must begin at least at the quadratic order.</p> Answer 12: <p>We assume the Stirling approximation (first order) $$\log(n!)\approx n \log(n) - n = g(n) \tag{1}$$</p> <p>And the Taylor expansion:</p> <p>$$g(x)\approx g(x_0)+g'(x_0)(x-x_0)+g''(x_0)\frac{(x-x_0)^2}{2}=\\ =x_0\log(x_0)-x_0 + \log(x_0)\delta+\frac{\delta^2}{2 x_0} \tag{2}$$</p> <p>where $\delta = x-x_0$ (increment).</p> <p>Then the change in the box that <em>decrements</em> its counting (box 1) by $\delta$ is</p> <p>$$\Delta_1(\ln\Omega_{\{n\}})=-\ln((n_0-\delta)!)+\ln(n_0!)=\\= \log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{3}$$</p> <p>The change in the other box that <em>increments</em> its counting (box 2) by $\delta$ is</p> <p>$$\Delta_2(\ln\Omega_{\{n\}})=-\ln((n_0+\delta)!)+\ln(n_0!)=\\ =-\log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{4}$$</p> <p>The total change is then $$\Delta(\ln\Omega_{\{n\}})= \Delta_1(\ln\Omega_{\{n\}})+\Delta_2(\ln\Omega_{\{n\}})=-\frac{\delta^2}{n_0} \tag{5}$$ </p> <p>(that's why we need to take a second-order Taylor expansion: because the linear terms cancel - not because "we are at a local maximum" - actually the result is valid for any $n_0$, not only for $n_0=N/M$)</p> <p>The original statement $$\delta\left(\ln\Omega_{\{n\}}\right)\approx \ln\Omega_0-\frac{\delta n^2}{2 n_0}$$ seems to have two errors: First, if mixes the value of the statistical weight with its change. If we are speaking of the change, then the first "$\delta$" is right, but the initial term $\ln\Omega_0$ is wrong - sanity check: if $\delta n=0$ the change should be zero). 
Furthermore, the result seems to have a missing two factor (indeed, the copied derivation emphasizes that the first variation is due to <em>one box</em>, but then it seems ot forget about the other one).</p> <p>Edit: Regarding your edit: your "equation in blue" is only correct if it's understood to be the change in one box (see eq $3$). If it's understood (as it seems) as the total change, then it's wrong. The net change is given by eq. $(5)$</p> Answer 13: <p>Let us define the following two configurations:</p> <ul> <li>An "optimal" configuration: $\mathbf n_1 := [n_0, \cdots, n_0]$ .</li> <li>The perturbed configuration: $\mathbf n_2 := [n_0 - \delta n, n_0 + \delta n, n_0, \cdots, n_0]$ .</li> </ul> <p>Statistical weights of the two configurations are: $$ \Omega_0 := \Omega_{\{\mathbf n_1\}} = \frac{N!}{(n_0!)^M}\,, \qquad \Omega_{\{\mathbf n_2\}} = \frac{N!}{(n_0-\delta n)!(n_0 + \delta n)! (n_0!)^{M-2}} \,. $$ Taking log of these two weights we get: \begin{align} \ln \Omega_0 =&amp;\; \ln N! - M \ln n_0!\,, \\ \qquad \ln \Omega_{\{\mathbf n_2\}} =&amp;\; \ln N! - \ln (n_0-\delta n)! - \ln(n_0+\delta n)! - (M-2) \ln n_0!\,. \end{align} Log of the number of microstates is proportional to <em>entropy</em>, so let's call these two quantities the entropies of the two configurations (ignoring the proportionality constant, which is called the <em>Boltzmann constant</em>). Change in entropy due to perturbation is: $$\ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 = - \ln(n_0-\delta n)! - \ln(n_0 + \delta n)! + 2\ln n_0!\,. \tag{1}$$</p> <p>Assuming $n_0$ and $n_0-\delta n$ are both fairly large we use <em>Stirling's approximation</em>, nameley $\ln m! \approx m \ln m - m$ for large $m$, to approximate (1): $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -(n_0 - \delta n) \ln (n_0 - \delta n)-(n_0 + \delta n) \ln (n_0 + \delta n) + 2 n_0 \ln n_0\,. \tag{2}$$ Now we use <em>Taylor expansion</em> to expand $\ln(n_0 \pm \delta n)$ by treating $\delta n$ as small compared to $n_0$. The expansion of $\ln(x+\epsilon)$ around $x$ is given by: $$ \ln(x + \epsilon) = \ln x + \frac{\epsilon}{x} - \frac{\epsilon^2}{2x^2} + \mathcal O\left(\left(\frac{\epsilon}{x}\right)^3\right)\,.$$ Using this in (2), and keeping terms upto quadratic order in $\delta n$ we find: $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -\frac{(\delta n)^2}{n_0}\,.$$ So the entropy of the perturbed system is given by: $$ \ln \Omega_{\{\mathbf n_2\}} \approx \ln\Omega_0 -\frac{(\delta n)^2}{n_0} \,. 
\tag{3}$$</p> <p>The reason why in (3) there is no term linear in $\delta n$ is very physical, an optimal configuration has the highest entropy, therefore change in entropy as you perturb from an optimal configuration must begin at least at the quadratic order.</p> Answer 14: <p>We assume the Stirling approximation (first order) $$\log(n!)\approx n \log(n) - n = g(n) \tag{1}$$</p> <p>And the Taylor expansion:</p> <p>$$g(x)\approx g(x_0)+g'(x_0)(x-x_0)+g''(x_0)\frac{(x-x_0)^2}{2}=\\ =x_0\log(x_0)-x_0 + \log(x_0)\delta+\frac{\delta^2}{2 x_0} \tag{2}$$</p> <p>where $\delta = x-x_0$ (increment).</p> <p>Then the change in the box that <em>decrements</em> its counting (box 1) by $\delta$ is</p> <p>$$\Delta_1(\ln\Omega_{\{n\}})=-\ln((n_0-\delta)!)+\ln(n_0!)=\\= \log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{3}$$</p> <p>The change in the other box that <em>increments</em> its counting (box 2) by $\delta$ is</p> <p>$$\Delta_2(\ln\Omega_{\{n\}})=-\ln((n_0+\delta)!)+\ln(n_0!)=\\ =-\log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{4}$$</p> <p>The total change is then $$\Delta(\ln\Omega_{\{n\}})= \Delta_1(\ln\Omega_{\{n\}})+\Delta_2(\ln\Omega_{\{n\}})=-\frac{\delta^2}{n_0} \tag{5}$$ </p> <p>(that's why we need to take a second-order Taylor expansion: because the linear terms cancel - not because "we are at a local maximum" - actually the result is valid for any $n_0$, not only for $n_0=N/M$)</p> <p>The original statement $$\delta\left(\ln\Omega_{\{n\}}\right)\approx \ln\Omega_0-\frac{\delta n^2}{2 n_0}$$ seems to have two errors: First, if mixes the value of the statistical weight with its change. If we are speaking of the change, then the first "$\delta$" is right, but the initial term $\ln\Omega_0$ is wrong - sanity check: if $\delta n=0$ the change should be zero). Furthermore, the result seems to have a missing two factor (indeed, the copied derivation emphasizes that the first variation is due to <em>one box</em>, but then it seems ot forget about the other one).</p> <p>Edit: Regarding your edit: your "equation in blue" is only correct if it's understood to be the change in one box (see eq $3$). If it's understood (as it seems) as the total change, then it's wrong. The net change is given by eq. $(5)$</p> Answer 15: <p>Let us define the following two configurations:</p> <ul> <li>An "optimal" configuration: $\mathbf n_1 := [n_0, \cdots, n_0]$ .</li> <li>The perturbed configuration: $\mathbf n_2 := [n_0 - \delta n, n_0 + \delta n, n_0, \cdots, n_0]$ .</li> </ul> <p>Statistical weights of the two configurations are: $$ \Omega_0 := \Omega_{\{\mathbf n_1\}} = \frac{N!}{(n_0!)^M}\,, \qquad \Omega_{\{\mathbf n_2\}} = \frac{N!}{(n_0-\delta n)!(n_0 + \delta n)! (n_0!)^{M-2}} \,. $$ Taking log of these two weights we get: \begin{align} \ln \Omega_0 =&amp;\; \ln N! - M \ln n_0!\,, \\ \qquad \ln \Omega_{\{\mathbf n_2\}} =&amp;\; \ln N! - \ln (n_0-\delta n)! - \ln(n_0+\delta n)! - (M-2) \ln n_0!\,. \end{align} Log of the number of microstates is proportional to <em>entropy</em>, so let's call these two quantities the entropies of the two configurations (ignoring the proportionality constant, which is called the <em>Boltzmann constant</em>). Change in entropy due to perturbation is: $$\ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 = - \ln(n_0-\delta n)! - \ln(n_0 + \delta n)! + 2\ln n_0!\,. \tag{1}$$</p> <p>Assuming $n_0$ and $n_0-\delta n$ are both fairly large we use <em>Stirling's approximation</em>, nameley $\ln m! 
\approx m \ln m - m$ for large $m$, to approximate (1): $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -(n_0 - \delta n) \ln (n_0 - \delta n)-(n_0 + \delta n) \ln (n_0 + \delta n) + 2 n_0 \ln n_0\,. \tag{2}$$ Now we use <em>Taylor expansion</em> to expand $\ln(n_0 \pm \delta n)$ by treating $\delta n$ as small compared to $n_0$. The expansion of $\ln(x+\epsilon)$ around $x$ is given by: $$ \ln(x + \epsilon) = \ln x + \frac{\epsilon}{x} - \frac{\epsilon^2}{2x^2} + \mathcal O\left(\left(\frac{\epsilon}{x}\right)^3\right)\,.$$ Using this in (2), and keeping terms upto quadratic order in $\delta n$ we find: $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -\frac{(\delta n)^2}{n_0}\,.$$ So the entropy of the perturbed system is given by: $$ \ln \Omega_{\{\mathbf n_2\}} \approx \ln\Omega_0 -\frac{(\delta n)^2}{n_0} \,. \tag{3}$$</p> <p>The reason why in (3) there is no term linear in $\delta n$ is very physical, an optimal configuration has the highest entropy, therefore change in entropy as you perturb from an optimal configuration must begin at least at the quadratic order.</p> Answer 16: <p>We assume the Stirling approximation (first order) $$\log(n!)\approx n \log(n) - n = g(n) \tag{1}$$</p> <p>And the Taylor expansion:</p> <p>$$g(x)\approx g(x_0)+g'(x_0)(x-x_0)+g''(x_0)\frac{(x-x_0)^2}{2}=\\ =x_0\log(x_0)-x_0 + \log(x_0)\delta+\frac{\delta^2}{2 x_0} \tag{2}$$</p> <p>where $\delta = x-x_0$ (increment).</p> <p>Then the change in the box that <em>decrements</em> its counting (box 1) by $\delta$ is</p> <p>$$\Delta_1(\ln\Omega_{\{n\}})=-\ln((n_0-\delta)!)+\ln(n_0!)=\\= \log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{3}$$</p> <p>The change in the other box that <em>increments</em> its counting (box 2) by $\delta$ is</p> <p>$$\Delta_2(\ln\Omega_{\{n\}})=-\ln((n_0+\delta)!)+\ln(n_0!)=\\ =-\log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{4}$$</p> <p>The total change is then $$\Delta(\ln\Omega_{\{n\}})= \Delta_1(\ln\Omega_{\{n\}})+\Delta_2(\ln\Omega_{\{n\}})=-\frac{\delta^2}{n_0} \tag{5}$$ </p> <p>(that's why we need to take a second-order Taylor expansion: because the linear terms cancel - not because "we are at a local maximum" - actually the result is valid for any $n_0$, not only for $n_0=N/M$)</p> <p>The original statement $$\delta\left(\ln\Omega_{\{n\}}\right)\approx \ln\Omega_0-\frac{\delta n^2}{2 n_0}$$ seems to have two errors: First, if mixes the value of the statistical weight with its change. If we are speaking of the change, then the first "$\delta$" is right, but the initial term $\ln\Omega_0$ is wrong - sanity check: if $\delta n=0$ the change should be zero). Furthermore, the result seems to have a missing two factor (indeed, the copied derivation emphasizes that the first variation is due to <em>one box</em>, but then it seems ot forget about the other one).</p> <p>Edit: Regarding your edit: your "equation in blue" is only correct if it's understood to be the change in one box (see eq $3$). If it's understood (as it seems) as the total change, then it's wrong. The net change is given by eq. $(5)$</p> Answer 17: <p>Let us define the following two configurations:</p> <ul> <li>An "optimal" configuration: $\mathbf n_1 := [n_0, \cdots, n_0]$ .</li> <li>The perturbed configuration: $\mathbf n_2 := [n_0 - \delta n, n_0 + \delta n, n_0, \cdots, n_0]$ .</li> </ul> <p>Statistical weights of the two configurations are: $$ \Omega_0 := \Omega_{\{\mathbf n_1\}} = \frac{N!}{(n_0!)^M}\,, \qquad \Omega_{\{\mathbf n_2\}} = \frac{N!}{(n_0-\delta n)!(n_0 + \delta n)! (n_0!)^{M-2}} \,. 
$$ Taking log of these two weights we get: \begin{align} \ln \Omega_0 =&amp;\; \ln N! - M \ln n_0!\,, \\ \qquad \ln \Omega_{\{\mathbf n_2\}} =&amp;\; \ln N! - \ln (n_0-\delta n)! - \ln(n_0+\delta n)! - (M-2) \ln n_0!\,. \end{align} Log of the number of microstates is proportional to <em>entropy</em>, so let's call these two quantities the entropies of the two configurations (ignoring the proportionality constant, which is called the <em>Boltzmann constant</em>). Change in entropy due to perturbation is: $$\ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 = - \ln(n_0-\delta n)! - \ln(n_0 + \delta n)! + 2\ln n_0!\,. \tag{1}$$</p> <p>Assuming $n_0$ and $n_0-\delta n$ are both fairly large we use <em>Stirling's approximation</em>, nameley $\ln m! \approx m \ln m - m$ for large $m$, to approximate (1): $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -(n_0 - \delta n) \ln (n_0 - \delta n)-(n_0 + \delta n) \ln (n_0 + \delta n) + 2 n_0 \ln n_0\,. \tag{2}$$ Now we use <em>Taylor expansion</em> to expand $\ln(n_0 \pm \delta n)$ by treating $\delta n$ as small compared to $n_0$. The expansion of $\ln(x+\epsilon)$ around $x$ is given by: $$ \ln(x + \epsilon) = \ln x + \frac{\epsilon}{x} - \frac{\epsilon^2}{2x^2} + \mathcal O\left(\left(\frac{\epsilon}{x}\right)^3\right)\,.$$ Using this in (2), and keeping terms upto quadratic order in $\delta n$ we find: $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -\frac{(\delta n)^2}{n_0}\,.$$ So the entropy of the perturbed system is given by: $$ \ln \Omega_{\{\mathbf n_2\}} \approx \ln\Omega_0 -\frac{(\delta n)^2}{n_0} \,. \tag{3}$$</p> <p>The reason why in (3) there is no term linear in $\delta n$ is very physical, an optimal configuration has the highest entropy, therefore change in entropy as you perturb from an optimal configuration must begin at least at the quadratic order.</p> Answer 18: <p>We assume the Stirling approximation (first order) $$\log(n!)\approx n \log(n) - n = g(n) \tag{1}$$</p> <p>And the Taylor expansion:</p> <p>$$g(x)\approx g(x_0)+g'(x_0)(x-x_0)+g''(x_0)\frac{(x-x_0)^2}{2}=\\ =x_0\log(x_0)-x_0 + \log(x_0)\delta+\frac{\delta^2}{2 x_0} \tag{2}$$</p> <p>where $\delta = x-x_0$ (increment).</p> <p>Then the change in the box that <em>decrements</em> its counting (box 1) by $\delta$ is</p> <p>$$\Delta_1(\ln\Omega_{\{n\}})=-\ln((n_0-\delta)!)+\ln(n_0!)=\\= \log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{3}$$</p> <p>The change in the other box that <em>increments</em> its counting (box 2) by $\delta$ is</p> <p>$$\Delta_2(\ln\Omega_{\{n\}})=-\ln((n_0+\delta)!)+\ln(n_0!)=\\ =-\log(n_0)\delta-\frac{\delta^2}{2 n_0} \tag{4}$$</p> <p>The total change is then $$\Delta(\ln\Omega_{\{n\}})= \Delta_1(\ln\Omega_{\{n\}})+\Delta_2(\ln\Omega_{\{n\}})=-\frac{\delta^2}{n_0} \tag{5}$$ </p> <p>(that's why we need to take a second-order Taylor expansion: because the linear terms cancel - not because "we are at a local maximum" - actually the result is valid for any $n_0$, not only for $n_0=N/M$)</p> <p>The original statement $$\delta\left(\ln\Omega_{\{n\}}\right)\approx \ln\Omega_0-\frac{\delta n^2}{2 n_0}$$ seems to have two errors: First, if mixes the value of the statistical weight with its change. If we are speaking of the change, then the first "$\delta$" is right, but the initial term $\ln\Omega_0$ is wrong - sanity check: if $\delta n=0$ the change should be zero). 
Furthermore, the result seems to have a missing two factor (indeed, the copied derivation emphasizes that the first variation is due to <em>one box</em>, but then it seems ot forget about the other one).</p> <p>Edit: Regarding your edit: your "equation in blue" is only correct if it's understood to be the change in one box (see eq $3$). If it's understood (as it seems) as the total change, then it's wrong. The net change is given by eq. $(5)$</p> Answer 19: <p>Let us define the following two configurations:</p> <ul> <li>An "optimal" configuration: $\mathbf n_1 := [n_0, \cdots, n_0]$ .</li> <li>The perturbed configuration: $\mathbf n_2 := [n_0 - \delta n, n_0 + \delta n, n_0, \cdots, n_0]$ .</li> </ul> <p>Statistical weights of the two configurations are: $$ \Omega_0 := \Omega_{\{\mathbf n_1\}} = \frac{N!}{(n_0!)^M}\,, \qquad \Omega_{\{\mathbf n_2\}} = \frac{N!}{(n_0-\delta n)!(n_0 + \delta n)! (n_0!)^{M-2}} \,. $$ Taking log of these two weights we get: \begin{align} \ln \Omega_0 =&amp;\; \ln N! - M \ln n_0!\,, \\ \qquad \ln \Omega_{\{\mathbf n_2\}} =&amp;\; \ln N! - \ln (n_0-\delta n)! - \ln(n_0+\delta n)! - (M-2) \ln n_0!\,. \end{align} Log of the number of microstates is proportional to <em>entropy</em>, so let's call these two quantities the entropies of the two configurations (ignoring the proportionality constant, which is called the <em>Boltzmann constant</em>). Change in entropy due to perturbation is: $$\ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 = - \ln(n_0-\delta n)! - \ln(n_0 + \delta n)! + 2\ln n_0!\,. \tag{1}$$</p> <p>Assuming $n_0$ and $n_0-\delta n$ are both fairly large we use <em>Stirling's approximation</em>, nameley $\ln m! \approx m \ln m - m$ for large $m$, to approximate (1): $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -(n_0 - \delta n) \ln (n_0 - \delta n)-(n_0 + \delta n) \ln (n_0 + \delta n) + 2 n_0 \ln n_0\,. \tag{2}$$ Now we use <em>Taylor expansion</em> to expand $\ln(n_0 \pm \delta n)$ by treating $\delta n$ as small compared to $n_0$. The expansion of $\ln(x+\epsilon)$ around $x$ is given by: $$ \ln(x + \epsilon) = \ln x + \frac{\epsilon}{x} - \frac{\epsilon^2}{2x^2} + \mathcal O\left(\left(\frac{\epsilon}{x}\right)^3\right)\,.$$ Using this in (2), and keeping terms upto quadratic order in $\delta n$ we find: $$ \ln \Omega_{\{\mathbf n_2\}} - \ln \Omega_0 \approx -\frac{(\delta n)^2}{n_0}\,.$$ So the entropy of the perturbed system is given by: $$ \ln \Omega_{\{\mathbf n_2\}} \approx \ln\Omega_0 -\frac{(\delta n)^2}{n_0} \,. 
Calculate $\int_0^{2\pi} \frac{1}{2+\sin(x)} \ dx$
2,091,783
7
https://math.stackexchange.com/questions/2091783/calculate-int-02-pi-frac12-sinx-dx
<p>$$\int_0^{2\pi} \frac{1}{2+\sin(x)} \ dx$$</p> <p>$$\sin(x)=\frac{e^{ix}-e^{-ix}}{2i}$$</p> <p>$$z=\alpha(x)=e^{ix}$$ <br /></p> <p>$$\sin(x)=\frac{z^2-1}{2zi}$$ $$2+\sin(x)=\frac{4zi+z^2-1}{2zi}$$ $$\frac{1}{2+\sin(x)}=\frac{2zi}{4zi+z^2-1}$$</p> <p><br /></p> <p>$$\int_\alpha \frac{2zi}{4zi+z^2-1} \cdot \frac{dz}{zi}$$</p> <p><br /></p> <p>$$4zi+z^2-1=0$$ $$z_1=\frac{-4i+2\sqrt{3}i}{2}$$ $$z_2=\frac{-4i-2\sqrt{3}i}{2}$$</p> <p><br /></p> <p>$$\lvert z_1 \rvert &lt;1$$ $$\lvert z_2 \rvert &gt;1$$</p> <p><br /> Using the residue theorem: <br /></p> <p>$$\int_\alpha \frac{2}{4zi+z^2-1}\,dz=2 \pi i \ \lim_{z\rightarrow-2i+\sqrt{3}i} \ \frac{2}{z+2i+\sqrt{3}i}=\frac{2 \pi}{\sqrt{3}}$$</p> <p><br/></p> <p>Is it correct?</p> <p>Thanks!</p>
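For reference, a worked check of the pole locations and the final value, using only the quantities already set up in the question:

$$z_{1,2}=(-2\pm\sqrt{3})\,i,\qquad \lvert z_1\rvert=2-\sqrt{3}\approx 0.268<1,\qquad \lvert z_2\rvert=2+\sqrt{3}\approx 3.732>1,$$

$$2\pi i\cdot\operatorname{Res}_{z=z_1}\frac{2}{(z-z_1)(z-z_2)}=2\pi i\cdot\frac{2}{z_1-z_2}=2\pi i\cdot\frac{2}{2\sqrt{3}\,i}=\frac{2\pi}{\sqrt{3}}\approx 3.6276\,.$$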
[ { "answer_id": 2091805, "score": 3, "text": "<p><strong>Yes, absolutely correct</strong>, but you may try an alternative approach by converting $\\sin x$ into $\\tan(x/2)$: $$\\sin(x) = \\frac{2\\cdot \\tan(x/2)}{1+\\tan^2(x/2)}\\,.$$</p>\n" }, { "answer_id": 2091806, "score": 1, "text": "<p>Another way is </p>\n\n<p>Set $u=\\tan\\frac{x}{2}$ and $du=\\sec^2\\frac{x}{2}\\cdot\\frac{1}{2}\\,dx \\rightarrow dx=\\frac{2}{1+u^2}\\,du$ and $\\sin(x)=\\frac{2u}{u^2+1}$</p>\n\n<p>$$\\int_0^{2\\pi} \\frac{1}{2+\\sin(x)} \\, dx=\\int_{-\\pi}^{\\pi} \\frac{1}{2+\\sin(x)} \\, dx= \\int_{-\\infty}^{\\infty} \\frac{2\\,du}{(u^2+1)(\\frac{2u}{u^2+1}+2)}=\\int_{-\\infty}^{\\infty} \\frac{du}{u^2+u+1}$$ </p>\n\n<p>And to solve this, just complete the square and you will get something as a function of $\\arctan(u)$.</p>\n" }, { "answer_id": 2091904, "score": 1, "text": "<p>\\begin{align}\n\\int_0^{2\\pi} \\frac{1}{2+\\sin(x)} dx&amp;=\\int_0^{2\\pi} \\frac{1}{1+\\sin^2(x/2)+\\cos^2(x/2)+2\\sin(x/2)\\cos(x/2)} dx\\\\\n&amp;=\\int_0^{2\\pi} \\frac{1}{1+(\\sin(x/2)+\\cos(x/2))^2} dx\\\\\n&amp;=\\int_0^{2\\pi} \\frac{1}{1+2\\sin^2(x/2+\\pi/4)} dx\\\\\n\\end{align}</p>\n\n<p>From here you should be able to continue recalling the behaviour of $\\arctan$.</p>\n" } ]
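The completing-the-square step that answer 2091806 leaves to the reader finishes in one line; a worked completion of the integral already derived there:

$$\int_{-\infty}^{\infty}\frac{du}{u^2+u+1}=\int_{-\infty}^{\infty}\frac{du}{\left(u+\frac{1}{2}\right)^2+\frac{3}{4}}=\left.\frac{2}{\sqrt{3}}\arctan\frac{2u+1}{\sqrt{3}}\right|_{-\infty}^{\infty}=\frac{2}{\sqrt{3}}\left(\frac{\pi}{2}+\frac{\pi}{2}\right)=\frac{2\pi}{\sqrt{3}}\,,$$

in agreement with the residue computation in the question.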
Question: Calculate $\int_0^{2\pi} \frac{1}{2+\sin(x)} \ dx$ Answer 1: <p><strong>Yes, absolutely correct</strong>, but you may try an alternative approach by converting $\sin x$ into $\tan(x/2)$: $$\sin(x) = \frac{2\cdot \tan(x/2)}{1+\tan^2(x/2)}\,.$$</p> Answer 2: <p>Another way is </p> <p>Set $u=\tan\frac{x}{2}$ and $du=\sec^2\frac{x}{2}\cdot\frac{1}{2}\,dx \rightarrow dx=\frac{2}{1+u^2}\,du$ and $\sin(x)=\frac{2u}{u^2+1}$</p> <p>$$\int_0^{2\pi} \frac{1}{2+\sin(x)} \, dx=\int_{-\pi}^{\pi} \frac{1}{2+\sin(x)} \, dx= \int_{-\infty}^{\infty} \frac{2\,du}{(u^2+1)(\frac{2u}{u^2+1}+2)}=\int_{-\infty}^{\infty} \frac{du}{u^2+u+1}$$ </p> <p>And to solve this, just complete the square and you will get something as a function of $\arctan(u)$.</p> Answer 3: <p>\begin{align} \int_0^{2\pi} \frac{1}{2+\sin(x)} dx&amp;=\int_0^{2\pi} \frac{1}{1+\sin^2(x/2)+\cos^2(x/2)+2\sin(x/2)\cos(x/2)} dx\\ &amp;=\int_0^{2\pi} \frac{1}{1+(\sin(x/2)+\cos(x/2))^2} dx\\ &amp;=\int_0^{2\pi} \frac{1}{1+2\sin^2(x/2+\pi/4)} dx\\ \end{align}</p> <p>From here you should be able to continue recalling the behaviour of $\arctan$.</p>
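Answer 3's route can be finished the same way; a worked continuation, using the standard half-period formula $\int_0^{\pi}\frac{dt}{a+b\sin^2 t}=\frac{\pi}{\sqrt{a(a+b)}}$ (valid for $a>0$, $a+b>0$):

$$\int_0^{2\pi}\frac{dx}{1+2\sin^2(x/2+\pi/4)}\;\overset{t=x/2+\pi/4}{=}\;2\int_{\pi/4}^{5\pi/4}\frac{dt}{1+2\sin^2 t}=2\int_0^{\pi}\frac{dt}{1+2\sin^2 t}=\frac{2\pi}{\sqrt{1\cdot 3}}=\frac{2\pi}{\sqrt{3}}\,,$$

again matching the other two methods. (The shift from $[\pi/4,\,5\pi/4]$ to $[0,\pi]$ uses the $\pi$-periodicity of $\sin^2 t$.)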
A simple proof by induction $P(n) = \frac{6^{2n} - 3^n}{11} \in \mathbb{N}$
2,091,861
7
https://math.stackexchange.com/questions/2091861/a-simple-proof-by-induction-pn-frac62n-3n11-in-mathbbn
<p>Verify by induction that $P(n) = \frac{6^{2n} - 3^n}{11} \in \mathbb{N}$ for all $n \in \mathbb{N}$, $n \ge 1$.</p> <p><em>Basis:</em> $P(1) = \frac{6^2 - 3}{11} = \frac{33}{11} = 3 \in \mathbb{N}$.</p> <p><em>Induction:</em> if the statement holds for some $n$, then it holds for $n+1$.</p> <p>I would appreciate some small hints because I'm stuck at $P(n+1) = \frac{6^{2(n+1)} - 3^{n+1}}{11}$</p>
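For reference, the next case also checks out numerically: $$P(2)=\frac{6^{4}-3^{2}}{11}=\frac{1296-9}{11}=\frac{1287}{11}=117\in\mathbb{N}\,.$$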
[ { "answer_id": 2091897, "score": 9, "text": "<p>You don't really need induction; just use a simple factorisation, as $$6^{2n}-3^n=3^n(12^n-1)$$ together with $12\\equiv 1 \\pmod{11}$, which gives $11 \\mid 12^n-1$.</p>\n" }, { "answer_id": 2091889, "score": 6, "text": "<p>We have $P(n)=\\frac{6^{2n}-3^n}{11} \\in \\mathbb{N}$.</p>\n\n<p>You have proven that the base case is true.</p>\n\n<p>Now, assume it is true for $n=k$:</p>\n\n<p>$$6^{2k}-3^k=11p \\tag{1}$$</p>\n\n<p>where $p \\in \\mathbb{N}$.</p>\n\n<p>Now prove it for $n=k+1$:</p>\n\n<p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \\cdot 6^{2k}-3 \\cdot 3^k \\tag{2}$</p>\n\n<p>Now, <strong>here comes the trick.</strong></p>\n\n<p>We can substitute $6^{2k}$ from equation $(1)$ into equation $(2)$:</p>\n\n<p>$$36 \\cdot (11p+3^k)-3 \\cdot 3^k=11\\cdot 36p+36\\cdot 3^k-3 \\cdot 3^k=11 \\cdot 36p+33 \\cdot 3^k \\tag{3}$$</p>\n\n<p>Since $11\\cdot 36p$ is obviously divisible by $11$, and $33 \\cdot 3^k$ can be divided by $11$ to give $3 \\cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p>\n\n<p>Hence, we are done.</p>\n" }, { "answer_id": 2091870, "score": 5, "text": "<p>Because $\\displaystyle \\frac{6^{2n}-3^n}{11}$ is a natural number, we can write, for some $k \\in \\mathbb N$, $6^{2n} =3^n +11k$. Then our proof becomes $$\\frac {6^{2n+2}-3^{n+1}}{11} = \\frac {36 (3^n +11k )-3 (3^{n})}{11} =\\frac {33 (3^n) +11 (36k)}{11} \\in \\mathbb N $$ and thus the proof is completed. Hope it helps. </p>\n" }, { "answer_id": 2091872, "score": 2, "text": "<p>So the statement $P(n)$ could also be formulated as: $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider\n \\begin{align} \n6^{2(n+1)} - 3^{n+1} &amp; = \\\\\n6^{2n} \\times 36 - 3 \\times 3^n &amp; = \\\\\n3(12 \\times 6^{2n} - 3^n )&amp;= \\\\\n3(11\\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \\ldots\n\\end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p>\n" }, { "answer_id": 2091876, "score": 2, "text": "<p><strong>HINT</strong></p>\n\n<p>From $$x^{n+1}-y^{n+1}=(x^n-y^n)(x+y)-xy(x^{n-1}-y^{n-1})$$</p>\n\n<p>we have</p>\n\n<p>$$6^{2n+2}-3^{n+1}=(6^{2n}-3^{n})(6^2+3)-108 (6^{2n-2}-3^{n-1})$$</p>\n\n<p>We use strong induction: if the claim holds for $n-1$ and $n$, it holds for $n+1$. </p>\n" }, { "answer_id": 2092076, "score": 1, "text": "<p>${\\rm mod}\\ 11\\!:\\ 6^{\\large 2}\\!\\equiv 3\\,\\Rightarrow\\, (6^{\\large 2})^{\\large N}\\!\\equiv 3^{\\large N} $ by the <a href=\"https://math.stackexchange.com/a/879262/242\">Congruence Power Rule</a> (it has a trivial inductive proof).</p>\n\n<p><strong>Remark</strong> $\\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisibility \"assembly language\" instead of higher-level arithmetical language. This viewpoint is discussed at length in <a href=\"https://math.stackexchange.com/a/1179145/242\">this answer.</a></p>\n" } ]
This viewpoint is discussed at length in <a href=\"https://math.stackexchange.com/a/1179145/242\">this answer.</a></p>\n" }, { "answer_id": 2091897, "score": 9, "text": "<p>You don't really need induction, or just use some simple production, as $$6^{2n}-3^n=3^n(12^n-1)$$</p>\n" }, { "answer_id": 2091889, "score": 6, "text": "<p>We have $P(n)=\\frac{6^{2n}-3^n}{11} \\in \\mathbb{N}$.</p>\n\n<p>You have proven that the base case is true.</p>\n\n<p>Now, assume true for $n=k$.</p>\n\n<p>$$6^{2k}-3^k=11p \\tag{1}$$</p>\n\n<p>Where $p \\in \\mathbb{N}$.</p>\n\n<p>True for $n=k+1$.</p>\n\n<p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \\cdot 6^{2k}-3 \\cdot 3^k \\tag{2}$</p>\n\n<p>Now, <strong>here comes the trick.</strong></p>\n\n<p>We can substitute from $6^{2k}$ from equation $(1)$ into equation $(2)$.</p>\n\n<p>$$36 \\cdot (11p+3^k)-3 \\cdot 3^k=11\\cdot 36p+36\\cdot 3^k-3 \\cdot 3^k=11 \\cdot 36p-33 \\cdot 3^k \\tag{3}$$</p>\n\n<p>Since $11\\cdot 36p$ is obviously divisible by $11$, and $33 \\cdot 3^k$ can be divided by $11$ to give $3 \\cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p>\n\n<p>Hence, we are done.</p>\n" }, { "answer_id": 2091870, "score": 5, "text": "<p>Because $\\displaystyle \\frac{6^{2n}-3^n}{11}$ is a natural number, we can write for some $k \\in \\mathbb N, 6^{2n} =3^n +11k $. Then our proof becomes $$\\frac {6^{2n+2}-3^{n+1}}{11} = \\frac {36 (3^n +11k )-3 (3^{n})}{11} =\\frac {33 (3^n) +11 (36k)}{11} \\in \\mathbb N $$ And thus the proof is completed. Hope it helps. </p>\n" }, { "answer_id": 2091872, "score": 2, "text": "<p>So the statement $P(n)$ could also be formulated that $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider\n \\begin{align} \n6^{2(n+1)} - 3^{n+1} &amp; = \\\\\n6^{2n} \\times 36 - 3 \\times 3^n &amp; = \\\\\n3(12 \\times 6^{2n} - 3^n )&amp;= \\\\\n3(11\\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \\ldots\n\\end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p>\n" }, { "answer_id": 2091876, "score": 2, "text": "<p><strong>HINT</strong></p>\n\n<p>From $$x^{n+1}-y^{n+1}=(x^n+y^n)(x-y)+xy(x^{n-1}+y^{n-1})$$</p>\n\n<p>We have </p>\n\n<p>$$6^{2n+2}-3^{n+1}=(6^{2n}+3^{n})(6^2-3)+108 (6^{2n-2}+3^{n-1})$$</p>\n\n<p>We use induction: If $n-1$ is true, $n+1$ is true. </p>\n" }, { "answer_id": 2092076, "score": 1, "text": "<p>${\\rm mod}\\ 11\\!:\\ 6^{\\large 2}\\!\\equiv 3\\,\\Rightarrow\\, (6^{\\large 2})^{\\large N}\\!\\equiv 3^{\\large N} $ by the <a href=\"https://math.stackexchange.com/a/879262/242\">Congruence Power Rule</a> (it has a trivial inductive proof).</p>\n\n<p><strong>Remark</strong> $\\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisbility \"assembly language\" instead of higher-level arithmetical language. 
This viewpoint is discussed at length in <a href=\"https://math.stackexchange.com/a/1179145/242\">this answer.</a></p>\n" }, { "answer_id": 2091897, "score": 9, "text": "<p>You don't really need induction, or just use some simple production, as $$6^{2n}-3^n=3^n(12^n-1)$$</p>\n" }, { "answer_id": 2091889, "score": 6, "text": "<p>We have $P(n)=\\frac{6^{2n}-3^n}{11} \\in \\mathbb{N}$.</p>\n\n<p>You have proven that the base case is true.</p>\n\n<p>Now, assume true for $n=k$.</p>\n\n<p>$$6^{2k}-3^k=11p \\tag{1}$$</p>\n\n<p>Where $p \\in \\mathbb{N}$.</p>\n\n<p>True for $n=k+1$.</p>\n\n<p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \\cdot 6^{2k}-3 \\cdot 3^k \\tag{2}$</p>\n\n<p>Now, <strong>here comes the trick.</strong></p>\n\n<p>We can substitute from $6^{2k}$ from equation $(1)$ into equation $(2)$.</p>\n\n<p>$$36 \\cdot (11p+3^k)-3 \\cdot 3^k=11\\cdot 36p+36\\cdot 3^k-3 \\cdot 3^k=11 \\cdot 36p-33 \\cdot 3^k \\tag{3}$$</p>\n\n<p>Since $11\\cdot 36p$ is obviously divisible by $11$, and $33 \\cdot 3^k$ can be divided by $11$ to give $3 \\cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p>\n\n<p>Hence, we are done.</p>\n" }, { "answer_id": 2091870, "score": 5, "text": "<p>Because $\\displaystyle \\frac{6^{2n}-3^n}{11}$ is a natural number, we can write for some $k \\in \\mathbb N, 6^{2n} =3^n +11k $. Then our proof becomes $$\\frac {6^{2n+2}-3^{n+1}}{11} = \\frac {36 (3^n +11k )-3 (3^{n})}{11} =\\frac {33 (3^n) +11 (36k)}{11} \\in \\mathbb N $$ And thus the proof is completed. Hope it helps. </p>\n" }, { "answer_id": 2091872, "score": 2, "text": "<p>So the statement $P(n)$ could also be formulated that $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider\n \\begin{align} \n6^{2(n+1)} - 3^{n+1} &amp; = \\\\\n6^{2n} \\times 36 - 3 \\times 3^n &amp; = \\\\\n3(12 \\times 6^{2n} - 3^n )&amp;= \\\\\n3(11\\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \\ldots\n\\end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p>\n" }, { "answer_id": 2091876, "score": 2, "text": "<p><strong>HINT</strong></p>\n\n<p>From $$x^{n+1}-y^{n+1}=(x^n+y^n)(x-y)+xy(x^{n-1}+y^{n-1})$$</p>\n\n<p>We have </p>\n\n<p>$$6^{2n+2}-3^{n+1}=(6^{2n}+3^{n})(6^2-3)+108 (6^{2n-2}+3^{n-1})$$</p>\n\n<p>We use induction: If $n-1$ is true, $n+1$ is true. </p>\n" }, { "answer_id": 2092076, "score": 1, "text": "<p>${\\rm mod}\\ 11\\!:\\ 6^{\\large 2}\\!\\equiv 3\\,\\Rightarrow\\, (6^{\\large 2})^{\\large N}\\!\\equiv 3^{\\large N} $ by the <a href=\"https://math.stackexchange.com/a/879262/242\">Congruence Power Rule</a> (it has a trivial inductive proof).</p>\n\n<p><strong>Remark</strong> $\\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisbility \"assembly language\" instead of higher-level arithmetical language. 
This viewpoint is discussed at length in <a href=\"https://math.stackexchange.com/a/1179145/242\">this answer.</a></p>\n" }, { "answer_id": 2091897, "score": 9, "text": "<p>You don't really need induction, or just use some simple production, as $$6^{2n}-3^n=3^n(12^n-1)$$</p>\n" }, { "answer_id": 2091889, "score": 6, "text": "<p>We have $P(n)=\\frac{6^{2n}-3^n}{11} \\in \\mathbb{N}$.</p>\n\n<p>You have proven that the base case is true.</p>\n\n<p>Now, assume true for $n=k$.</p>\n\n<p>$$6^{2k}-3^k=11p \\tag{1}$$</p>\n\n<p>Where $p \\in \\mathbb{N}$.</p>\n\n<p>True for $n=k+1$.</p>\n\n<p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \\cdot 6^{2k}-3 \\cdot 3^k \\tag{2}$</p>\n\n<p>Now, <strong>here comes the trick.</strong></p>\n\n<p>We can substitute from $6^{2k}$ from equation $(1)$ into equation $(2)$.</p>\n\n<p>$$36 \\cdot (11p+3^k)-3 \\cdot 3^k=11\\cdot 36p+36\\cdot 3^k-3 \\cdot 3^k=11 \\cdot 36p-33 \\cdot 3^k \\tag{3}$$</p>\n\n<p>Since $11\\cdot 36p$ is obviously divisible by $11$, and $33 \\cdot 3^k$ can be divided by $11$ to give $3 \\cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p>\n\n<p>Hence, we are done.</p>\n" }, { "answer_id": 2091870, "score": 5, "text": "<p>Because $\\displaystyle \\frac{6^{2n}-3^n}{11}$ is a natural number, we can write for some $k \\in \\mathbb N, 6^{2n} =3^n +11k $. Then our proof becomes $$\\frac {6^{2n+2}-3^{n+1}}{11} = \\frac {36 (3^n +11k )-3 (3^{n})}{11} =\\frac {33 (3^n) +11 (36k)}{11} \\in \\mathbb N $$ And thus the proof is completed. Hope it helps. </p>\n" }, { "answer_id": 2091872, "score": 2, "text": "<p>So the statement $P(n)$ could also be formulated that $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider\n \\begin{align} \n6^{2(n+1)} - 3^{n+1} &amp; = \\\\\n6^{2n} \\times 36 - 3 \\times 3^n &amp; = \\\\\n3(12 \\times 6^{2n} - 3^n )&amp;= \\\\\n3(11\\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \\ldots\n\\end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p>\n" }, { "answer_id": 2091876, "score": 2, "text": "<p><strong>HINT</strong></p>\n\n<p>From $$x^{n+1}-y^{n+1}=(x^n+y^n)(x-y)+xy(x^{n-1}+y^{n-1})$$</p>\n\n<p>We have </p>\n\n<p>$$6^{2n+2}-3^{n+1}=(6^{2n}+3^{n})(6^2-3)+108 (6^{2n-2}+3^{n-1})$$</p>\n\n<p>We use induction: If $n-1$ is true, $n+1$ is true. </p>\n" }, { "answer_id": 2092076, "score": 1, "text": "<p>${\\rm mod}\\ 11\\!:\\ 6^{\\large 2}\\!\\equiv 3\\,\\Rightarrow\\, (6^{\\large 2})^{\\large N}\\!\\equiv 3^{\\large N} $ by the <a href=\"https://math.stackexchange.com/a/879262/242\">Congruence Power Rule</a> (it has a trivial inductive proof).</p>\n\n<p><strong>Remark</strong> $\\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisbility \"assembly language\" instead of higher-level arithmetical language. 
This viewpoint is discussed at length in <a href=\"https://math.stackexchange.com/a/1179145/242\">this answer.</a></p>\n" }, { "answer_id": 2091897, "score": 9, "text": "<p>You don't really need induction, or just use some simple production, as $$6^{2n}-3^n=3^n(12^n-1)$$</p>\n" }, { "answer_id": 2091889, "score": 6, "text": "<p>We have $P(n)=\\frac{6^{2n}-3^n}{11} \\in \\mathbb{N}$.</p>\n\n<p>You have proven that the base case is true.</p>\n\n<p>Now, assume true for $n=k$.</p>\n\n<p>$$6^{2k}-3^k=11p \\tag{1}$$</p>\n\n<p>Where $p \\in \\mathbb{N}$.</p>\n\n<p>True for $n=k+1$.</p>\n\n<p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \\cdot 6^{2k}-3 \\cdot 3^k \\tag{2}$</p>\n\n<p>Now, <strong>here comes the trick.</strong></p>\n\n<p>We can substitute from $6^{2k}$ from equation $(1)$ into equation $(2)$.</p>\n\n<p>$$36 \\cdot (11p+3^k)-3 \\cdot 3^k=11\\cdot 36p+36\\cdot 3^k-3 \\cdot 3^k=11 \\cdot 36p-33 \\cdot 3^k \\tag{3}$$</p>\n\n<p>Since $11\\cdot 36p$ is obviously divisible by $11$, and $33 \\cdot 3^k$ can be divided by $11$ to give $3 \\cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p>\n\n<p>Hence, we are done.</p>\n" }, { "answer_id": 2091870, "score": 5, "text": "<p>Because $\\displaystyle \\frac{6^{2n}-3^n}{11}$ is a natural number, we can write for some $k \\in \\mathbb N, 6^{2n} =3^n +11k $. Then our proof becomes $$\\frac {6^{2n+2}-3^{n+1}}{11} = \\frac {36 (3^n +11k )-3 (3^{n})}{11} =\\frac {33 (3^n) +11 (36k)}{11} \\in \\mathbb N $$ And thus the proof is completed. Hope it helps. </p>\n" }, { "answer_id": 2091872, "score": 2, "text": "<p>So the statement $P(n)$ could also be formulated that $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider\n \\begin{align} \n6^{2(n+1)} - 3^{n+1} &amp; = \\\\\n6^{2n} \\times 36 - 3 \\times 3^n &amp; = \\\\\n3(12 \\times 6^{2n} - 3^n )&amp;= \\\\\n3(11\\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \\ldots\n\\end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p>\n" }, { "answer_id": 2091876, "score": 2, "text": "<p><strong>HINT</strong></p>\n\n<p>From $$x^{n+1}-y^{n+1}=(x^n+y^n)(x-y)+xy(x^{n-1}+y^{n-1})$$</p>\n\n<p>We have </p>\n\n<p>$$6^{2n+2}-3^{n+1}=(6^{2n}+3^{n})(6^2-3)+108 (6^{2n-2}+3^{n-1})$$</p>\n\n<p>We use induction: If $n-1$ is true, $n+1$ is true. </p>\n" }, { "answer_id": 2092076, "score": 1, "text": "<p>${\\rm mod}\\ 11\\!:\\ 6^{\\large 2}\\!\\equiv 3\\,\\Rightarrow\\, (6^{\\large 2})^{\\large N}\\!\\equiv 3^{\\large N} $ by the <a href=\"https://math.stackexchange.com/a/879262/242\">Congruence Power Rule</a> (it has a trivial inductive proof).</p>\n\n<p><strong>Remark</strong> $\\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisbility \"assembly language\" instead of higher-level arithmetical language. 
This viewpoint is discussed at length in <a href=\"https://math.stackexchange.com/a/1179145/242\">this answer.</a></p>\n" }, { "answer_id": 2091897, "score": 9, "text": "<p>You don't really need induction, or just use some simple production, as $$6^{2n}-3^n=3^n(12^n-1)$$</p>\n" }, { "answer_id": 2091889, "score": 6, "text": "<p>We have $P(n)=\\frac{6^{2n}-3^n}{11} \\in \\mathbb{N}$.</p>\n\n<p>You have proven that the base case is true.</p>\n\n<p>Now, assume true for $n=k$.</p>\n\n<p>$$6^{2k}-3^k=11p \\tag{1}$$</p>\n\n<p>Where $p \\in \\mathbb{N}$.</p>\n\n<p>True for $n=k+1$.</p>\n\n<p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \\cdot 6^{2k}-3 \\cdot 3^k \\tag{2}$</p>\n\n<p>Now, <strong>here comes the trick.</strong></p>\n\n<p>We can substitute from $6^{2k}$ from equation $(1)$ into equation $(2)$.</p>\n\n<p>$$36 \\cdot (11p+3^k)-3 \\cdot 3^k=11\\cdot 36p+36\\cdot 3^k-3 \\cdot 3^k=11 \\cdot 36p-33 \\cdot 3^k \\tag{3}$$</p>\n\n<p>Since $11\\cdot 36p$ is obviously divisible by $11$, and $33 \\cdot 3^k$ can be divided by $11$ to give $3 \\cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p>\n\n<p>Hence, we are done.</p>\n" }, { "answer_id": 2091870, "score": 5, "text": "<p>Because $\\displaystyle \\frac{6^{2n}-3^n}{11}$ is a natural number, we can write for some $k \\in \\mathbb N, 6^{2n} =3^n +11k $. Then our proof becomes $$\\frac {6^{2n+2}-3^{n+1}}{11} = \\frac {36 (3^n +11k )-3 (3^{n})}{11} =\\frac {33 (3^n) +11 (36k)}{11} \\in \\mathbb N $$ And thus the proof is completed. Hope it helps. </p>\n" }, { "answer_id": 2091872, "score": 2, "text": "<p>So the statement $P(n)$ could also be formulated that $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider\n \\begin{align} \n6^{2(n+1)} - 3^{n+1} &amp; = \\\\\n6^{2n} \\times 36 - 3 \\times 3^n &amp; = \\\\\n3(12 \\times 6^{2n} - 3^n )&amp;= \\\\\n3(11\\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \\ldots\n\\end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p>\n" }, { "answer_id": 2091876, "score": 2, "text": "<p><strong>HINT</strong></p>\n\n<p>From $$x^{n+1}-y^{n+1}=(x^n+y^n)(x-y)+xy(x^{n-1}+y^{n-1})$$</p>\n\n<p>We have </p>\n\n<p>$$6^{2n+2}-3^{n+1}=(6^{2n}+3^{n})(6^2-3)+108 (6^{2n-2}+3^{n-1})$$</p>\n\n<p>We use induction: If $n-1$ is true, $n+1$ is true. </p>\n" }, { "answer_id": 2092076, "score": 1, "text": "<p>${\\rm mod}\\ 11\\!:\\ 6^{\\large 2}\\!\\equiv 3\\,\\Rightarrow\\, (6^{\\large 2})^{\\large N}\\!\\equiv 3^{\\large N} $ by the <a href=\"https://math.stackexchange.com/a/879262/242\">Congruence Power Rule</a> (it has a trivial inductive proof).</p>\n\n<p><strong>Remark</strong> $\\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisbility \"assembly language\" instead of higher-level arithmetical language. 
This viewpoint is discussed at length in <a href=\"https://math.stackexchange.com/a/1179145/242\">this answer.</a></p>\n" }, { "answer_id": 2091897, "score": 9, "text": "<p>You don't really need induction, or just use some simple production, as $$6^{2n}-3^n=3^n(12^n-1)$$</p>\n" }, { "answer_id": 2091889, "score": 6, "text": "<p>We have $P(n)=\\frac{6^{2n}-3^n}{11} \\in \\mathbb{N}$.</p>\n\n<p>You have proven that the base case is true.</p>\n\n<p>Now, assume true for $n=k$.</p>\n\n<p>$$6^{2k}-3^k=11p \\tag{1}$$</p>\n\n<p>Where $p \\in \\mathbb{N}$.</p>\n\n<p>True for $n=k+1$.</p>\n\n<p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \\cdot 6^{2k}-3 \\cdot 3^k \\tag{2}$</p>\n\n<p>Now, <strong>here comes the trick.</strong></p>\n\n<p>We can substitute from $6^{2k}$ from equation $(1)$ into equation $(2)$.</p>\n\n<p>$$36 \\cdot (11p+3^k)-3 \\cdot 3^k=11\\cdot 36p+36\\cdot 3^k-3 \\cdot 3^k=11 \\cdot 36p-33 \\cdot 3^k \\tag{3}$$</p>\n\n<p>Since $11\\cdot 36p$ is obviously divisible by $11$, and $33 \\cdot 3^k$ can be divided by $11$ to give $3 \\cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p>\n\n<p>Hence, we are done.</p>\n" }, { "answer_id": 2091870, "score": 5, "text": "<p>Because $\\displaystyle \\frac{6^{2n}-3^n}{11}$ is a natural number, we can write for some $k \\in \\mathbb N, 6^{2n} =3^n +11k $. Then our proof becomes $$\\frac {6^{2n+2}-3^{n+1}}{11} = \\frac {36 (3^n +11k )-3 (3^{n})}{11} =\\frac {33 (3^n) +11 (36k)}{11} \\in \\mathbb N $$ And thus the proof is completed. Hope it helps. </p>\n" }, { "answer_id": 2091872, "score": 2, "text": "<p>So the statement $P(n)$ could also be formulated that $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider\n \\begin{align} \n6^{2(n+1)} - 3^{n+1} &amp; = \\\\\n6^{2n} \\times 36 - 3 \\times 3^n &amp; = \\\\\n3(12 \\times 6^{2n} - 3^n )&amp;= \\\\\n3(11\\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \\ldots\n\\end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p>\n" }, { "answer_id": 2091876, "score": 2, "text": "<p><strong>HINT</strong></p>\n\n<p>From $$x^{n+1}-y^{n+1}=(x^n+y^n)(x-y)+xy(x^{n-1}+y^{n-1})$$</p>\n\n<p>We have </p>\n\n<p>$$6^{2n+2}-3^{n+1}=(6^{2n}+3^{n})(6^2-3)+108 (6^{2n-2}+3^{n-1})$$</p>\n\n<p>We use induction: If $n-1$ is true, $n+1$ is true. </p>\n" }, { "answer_id": 2092076, "score": 1, "text": "<p>${\\rm mod}\\ 11\\!:\\ 6^{\\large 2}\\!\\equiv 3\\,\\Rightarrow\\, (6^{\\large 2})^{\\large N}\\!\\equiv 3^{\\large N} $ by the <a href=\"https://math.stackexchange.com/a/879262/242\">Congruence Power Rule</a> (it has a trivial inductive proof).</p>\n\n<p><strong>Remark</strong> $\\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisbility \"assembly language\" instead of higher-level arithmetical language. 
This viewpoint is discussed at length in <a href=\"https://math.stackexchange.com/a/1179145/242\">this answer.</a></p>\n" }, { "answer_id": 2091897, "score": 9, "text": "<p>You don't really need induction, or just use some simple production, as $$6^{2n}-3^n=3^n(12^n-1)$$</p>\n" }, { "answer_id": 2091889, "score": 6, "text": "<p>We have $P(n)=\\frac{6^{2n}-3^n}{11} \\in \\mathbb{N}$.</p>\n\n<p>You have proven that the base case is true.</p>\n\n<p>Now, assume true for $n=k$.</p>\n\n<p>$$6^{2k}-3^k=11p \\tag{1}$$</p>\n\n<p>Where $p \\in \\mathbb{N}$.</p>\n\n<p>True for $n=k+1$.</p>\n\n<p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \\cdot 6^{2k}-3 \\cdot 3^k \\tag{2}$</p>\n\n<p>Now, <strong>here comes the trick.</strong></p>\n\n<p>We can substitute from $6^{2k}$ from equation $(1)$ into equation $(2)$.</p>\n\n<p>$$36 \\cdot (11p+3^k)-3 \\cdot 3^k=11\\cdot 36p+36\\cdot 3^k-3 \\cdot 3^k=11 \\cdot 36p-33 \\cdot 3^k \\tag{3}$$</p>\n\n<p>Since $11\\cdot 36p$ is obviously divisible by $11$, and $33 \\cdot 3^k$ can be divided by $11$ to give $3 \\cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p>\n\n<p>Hence, we are done.</p>\n" }, { "answer_id": 2091870, "score": 5, "text": "<p>Because $\\displaystyle \\frac{6^{2n}-3^n}{11}$ is a natural number, we can write for some $k \\in \\mathbb N, 6^{2n} =3^n +11k $. Then our proof becomes $$\\frac {6^{2n+2}-3^{n+1}}{11} = \\frac {36 (3^n +11k )-3 (3^{n})}{11} =\\frac {33 (3^n) +11 (36k)}{11} \\in \\mathbb N $$ And thus the proof is completed. Hope it helps. </p>\n" }, { "answer_id": 2091872, "score": 2, "text": "<p>So the statement $P(n)$ could also be formulated that $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider\n \\begin{align} \n6^{2(n+1)} - 3^{n+1} &amp; = \\\\\n6^{2n} \\times 36 - 3 \\times 3^n &amp; = \\\\\n3(12 \\times 6^{2n} - 3^n )&amp;= \\\\\n3(11\\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \\ldots\n\\end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p>\n" }, { "answer_id": 2091876, "score": 2, "text": "<p><strong>HINT</strong></p>\n\n<p>From $$x^{n+1}-y^{n+1}=(x^n+y^n)(x-y)+xy(x^{n-1}+y^{n-1})$$</p>\n\n<p>We have </p>\n\n<p>$$6^{2n+2}-3^{n+1}=(6^{2n}+3^{n})(6^2-3)+108 (6^{2n-2}+3^{n-1})$$</p>\n\n<p>We use induction: If $n-1$ is true, $n+1$ is true. </p>\n" }, { "answer_id": 2092076, "score": 1, "text": "<p>${\\rm mod}\\ 11\\!:\\ 6^{\\large 2}\\!\\equiv 3\\,\\Rightarrow\\, (6^{\\large 2})^{\\large N}\\!\\equiv 3^{\\large N} $ by the <a href=\"https://math.stackexchange.com/a/879262/242\">Congruence Power Rule</a> (it has a trivial inductive proof).</p>\n\n<p><strong>Remark</strong> $\\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisbility \"assembly language\" instead of higher-level arithmetical language. 
This viewpoint is discussed at length in <a href=\"https://math.stackexchange.com/a/1179145/242\">this answer.</a></p>\n" }, { "answer_id": 2091897, "score": 9, "text": "<p>You don't really need induction, or just use some simple production, as $$6^{2n}-3^n=3^n(12^n-1)$$</p>\n" }, { "answer_id": 2091889, "score": 6, "text": "<p>We have $P(n)=\\frac{6^{2n}-3^n}{11} \\in \\mathbb{N}$.</p>\n\n<p>You have proven that the base case is true.</p>\n\n<p>Now, assume true for $n=k$.</p>\n\n<p>$$6^{2k}-3^k=11p \\tag{1}$$</p>\n\n<p>Where $p \\in \\mathbb{N}$.</p>\n\n<p>True for $n=k+1$.</p>\n\n<p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \\cdot 6^{2k}-3 \\cdot 3^k \\tag{2}$</p>\n\n<p>Now, <strong>here comes the trick.</strong></p>\n\n<p>We can substitute from $6^{2k}$ from equation $(1)$ into equation $(2)$.</p>\n\n<p>$$36 \\cdot (11p+3^k)-3 \\cdot 3^k=11\\cdot 36p+36\\cdot 3^k-3 \\cdot 3^k=11 \\cdot 36p-33 \\cdot 3^k \\tag{3}$$</p>\n\n<p>Since $11\\cdot 36p$ is obviously divisible by $11$, and $33 \\cdot 3^k$ can be divided by $11$ to give $3 \\cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p>\n\n<p>Hence, we are done.</p>\n" }, { "answer_id": 2091870, "score": 5, "text": "<p>Because $\\displaystyle \\frac{6^{2n}-3^n}{11}$ is a natural number, we can write for some $k \\in \\mathbb N, 6^{2n} =3^n +11k $. Then our proof becomes $$\\frac {6^{2n+2}-3^{n+1}}{11} = \\frac {36 (3^n +11k )-3 (3^{n})}{11} =\\frac {33 (3^n) +11 (36k)}{11} \\in \\mathbb N $$ And thus the proof is completed. Hope it helps. </p>\n" }, { "answer_id": 2091872, "score": 2, "text": "<p>So the statement $P(n)$ could also be formulated that $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider\n \\begin{align} \n6^{2(n+1)} - 3^{n+1} &amp; = \\\\\n6^{2n} \\times 36 - 3 \\times 3^n &amp; = \\\\\n3(12 \\times 6^{2n} - 3^n )&amp;= \\\\\n3(11\\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \\ldots\n\\end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p>\n" }, { "answer_id": 2091876, "score": 2, "text": "<p><strong>HINT</strong></p>\n\n<p>From $$x^{n+1}-y^{n+1}=(x^n+y^n)(x-y)+xy(x^{n-1}+y^{n-1})$$</p>\n\n<p>We have </p>\n\n<p>$$6^{2n+2}-3^{n+1}=(6^{2n}+3^{n})(6^2-3)+108 (6^{2n-2}+3^{n-1})$$</p>\n\n<p>We use induction: If $n-1$ is true, $n+1$ is true. </p>\n" }, { "answer_id": 2092076, "score": 1, "text": "<p>${\\rm mod}\\ 11\\!:\\ 6^{\\large 2}\\!\\equiv 3\\,\\Rightarrow\\, (6^{\\large 2})^{\\large N}\\!\\equiv 3^{\\large N} $ by the <a href=\"https://math.stackexchange.com/a/879262/242\">Congruence Power Rule</a> (it has a trivial inductive proof).</p>\n\n<p><strong>Remark</strong> $\\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisbility \"assembly language\" instead of higher-level arithmetical language. 
This viewpoint is discussed at length in <a href=\"https://math.stackexchange.com/a/1179145/242\">this answer.</a></p>\n" }, { "answer_id": 2091897, "score": 9, "text": "<p>You don't really need induction, or just use some simple production, as $$6^{2n}-3^n=3^n(12^n-1)$$</p>\n" }, { "answer_id": 2091889, "score": 6, "text": "<p>We have $P(n)=\\frac{6^{2n}-3^n}{11} \\in \\mathbb{N}$.</p>\n\n<p>You have proven that the base case is true.</p>\n\n<p>Now, assume true for $n=k$.</p>\n\n<p>$$6^{2k}-3^k=11p \\tag{1}$$</p>\n\n<p>Where $p \\in \\mathbb{N}$.</p>\n\n<p>True for $n=k+1$.</p>\n\n<p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \\cdot 6^{2k}-3 \\cdot 3^k \\tag{2}$</p>\n\n<p>Now, <strong>here comes the trick.</strong></p>\n\n<p>We can substitute from $6^{2k}$ from equation $(1)$ into equation $(2)$.</p>\n\n<p>$$36 \\cdot (11p+3^k)-3 \\cdot 3^k=11\\cdot 36p+36\\cdot 3^k-3 \\cdot 3^k=11 \\cdot 36p-33 \\cdot 3^k \\tag{3}$$</p>\n\n<p>Since $11\\cdot 36p$ is obviously divisible by $11$, and $33 \\cdot 3^k$ can be divided by $11$ to give $3 \\cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p>\n\n<p>Hence, we are done.</p>\n" }, { "answer_id": 2091870, "score": 5, "text": "<p>Because $\\displaystyle \\frac{6^{2n}-3^n}{11}$ is a natural number, we can write for some $k \\in \\mathbb N, 6^{2n} =3^n +11k $. Then our proof becomes $$\\frac {6^{2n+2}-3^{n+1}}{11} = \\frac {36 (3^n +11k )-3 (3^{n})}{11} =\\frac {33 (3^n) +11 (36k)}{11} \\in \\mathbb N $$ And thus the proof is completed. Hope it helps. </p>\n" }, { "answer_id": 2091872, "score": 2, "text": "<p>So the statement $P(n)$ could also be formulated that $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider\n \\begin{align} \n6^{2(n+1)} - 3^{n+1} &amp; = \\\\\n6^{2n} \\times 36 - 3 \\times 3^n &amp; = \\\\\n3(12 \\times 6^{2n} - 3^n )&amp;= \\\\\n3(11\\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \\ldots\n\\end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p>\n" }, { "answer_id": 2091876, "score": 2, "text": "<p><strong>HINT</strong></p>\n\n<p>From $$x^{n+1}-y^{n+1}=(x^n+y^n)(x-y)+xy(x^{n-1}+y^{n-1})$$</p>\n\n<p>We have </p>\n\n<p>$$6^{2n+2}-3^{n+1}=(6^{2n}+3^{n})(6^2-3)+108 (6^{2n-2}+3^{n-1})$$</p>\n\n<p>We use induction: If $n-1$ is true, $n+1$ is true. </p>\n" }, { "answer_id": 2092076, "score": 1, "text": "<p>${\\rm mod}\\ 11\\!:\\ 6^{\\large 2}\\!\\equiv 3\\,\\Rightarrow\\, (6^{\\large 2})^{\\large N}\\!\\equiv 3^{\\large N} $ by the <a href=\"https://math.stackexchange.com/a/879262/242\">Congruence Power Rule</a> (it has a trivial inductive proof).</p>\n\n<p><strong>Remark</strong> $\\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisbility \"assembly language\" instead of higher-level arithmetical language. This viewpoint is discussed at length in <a href=\"https://math.stackexchange.com/a/1179145/242\">this answer.</a></p>\n" } ]
Question: A simple proof by induction $P(n) = \frac{6^{2n} - 3^n}{11} \in \mathbb{N}$ Answer 1: <p>You don't really need induction; just use a simple factorization, as $$6^{2n}-3^n=3^n(12^n-1)$$</p> Answer 2: <p>We have $P(n)=\frac{6^{2n}-3^n}{11} \in \mathbb{N}$.</p> <p>You have proven that the base case is true.</p> <p>Now, assume true for $n=k$.</p> <p>$$6^{2k}-3^k=11p \tag{1}$$</p> <p>Where $p \in \mathbb{N}$.</p> <p>Now show it is true for $n=k+1$.</p> <p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \cdot 6^{2k}-3 \cdot 3^k \tag{2}$</p> <p>Now, <strong>here comes the trick.</strong></p> <p>We can substitute $6^{2k}=11p+3^k$ from equation $(1)$ into equation $(2)$.</p> <p>$$36 \cdot (11p+3^k)-3 \cdot 3^k=11\cdot 36p+36\cdot 3^k-3 \cdot 3^k=11 \cdot 36p+33 \cdot 3^k \tag{3}$$</p> <p>Since $11\cdot 36p$ is obviously divisible by $11$, and $33 \cdot 3^k$ can be divided by $11$ to give $3 \cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p> <p>Hence, we are done.</p> Answer 3: <p>Because $\displaystyle \frac{6^{2n}-3^n}{11}$ is a natural number, we can write for some $k \in \mathbb N, 6^{2n} =3^n +11k $. Then our proof becomes $$\frac {6^{2n+2}-3^{n+1}}{11} = \frac {36 (3^n +11k )-3 (3^{n})}{11} =\frac {33 (3^n) +11 (36k)}{11} \in \mathbb N $$ And thus the proof is completed. Hope it helps. </p> Answer 4: <p>So the statement $P(n)$ could also be formulated that $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider \begin{align} 6^{2(n+1)} - 3^{n+1} &amp; = \\ 6^{2n} \times 36 - 3 \times 3^n &amp; = \\ 3(12 \times 6^{2n} - 3^n )&amp;= \\ 3(11\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \ldots \end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p> Answer 5: <p><strong>HINT</strong></p> <p>From $$x^{n+1}-y^{n+1}=(x^n+y^n)(x-y)+xy(x^{n-1}-y^{n-1})$$</p> <p>We have </p> <p>$$6^{2n+2}-3^{n+1}=(6^{2n}+3^{n})(6^2-3)+108 (6^{2n-2}-3^{n-1})$$</p> <p>We use induction: if the statement holds for $n-1$, it holds for $n+1$ (so two base cases, e.g. $n=1$ and $n=2$, are needed). </p> Answer 6: <p>${\rm mod}\ 11\!:\ 6^{\large 2}\!\equiv 3\,\Rightarrow\, (6^{\large 2})^{\large N}\!\equiv 3^{\large N} $ by the <a href="https://math.stackexchange.com/a/879262/242">Congruence Power Rule</a> (it has a trivial inductive proof).</p> <p><strong>Remark</strong> $\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisibility "assembly language" instead of higher-level arithmetical language. This viewpoint is discussed at length in <a href="https://math.stackexchange.com/a/1179145/242">this answer.</a></p>
This viewpoint is discussed at length in <a href="https://math.stackexchange.com/a/1179145/242">this answer.</a></p> Answer 73: <p>You don't really need induction, or just use some simple production, as $$6^{2n}-3^n=3^n(12^n-1)$$</p> Answer 74: <p>We have $P(n)=\frac{6^{2n}-3^n}{11} \in \mathbb{N}$.</p> <p>You have proven that the base case is true.</p> <p>Now, assume true for $n=k$.</p> <p>$$6^{2k}-3^k=11p \tag{1}$$</p> <p>Where $p \in \mathbb{N}$.</p> <p>True for $n=k+1$.</p> <p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \cdot 6^{2k}-3 \cdot 3^k \tag{2}$</p> <p>Now, <strong>here comes the trick.</strong></p> <p>We can substitute from $6^{2k}$ from equation $(1)$ into equation $(2)$.</p> <p>$$36 \cdot (11p+3^k)-3 \cdot 3^k=11\cdot 36p+36\cdot 3^k-3 \cdot 3^k=11 \cdot 36p-33 \cdot 3^k \tag{3}$$</p> <p>Since $11\cdot 36p$ is obviously divisible by $11$, and $33 \cdot 3^k$ can be divided by $11$ to give $3 \cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p> <p>Hence, we are done.</p> Answer 75: <p>Because $\displaystyle \frac{6^{2n}-3^n}{11}$ is a natural number, we can write for some $k \in \mathbb N, 6^{2n} =3^n +11k $. Then our proof becomes $$\frac {6^{2n+2}-3^{n+1}}{11} = \frac {36 (3^n +11k )-3 (3^{n})}{11} =\frac {33 (3^n) +11 (36k)}{11} \in \mathbb N $$ And thus the proof is completed. Hope it helps. </p> Answer 76: <p>So the statement $P(n)$ could also be formulated that $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider \begin{align} 6^{2(n+1)} - 3^{n+1} &amp; = \\ 6^{2n} \times 36 - 3 \times 3^n &amp; = \\ 3(12 \times 6^{2n} - 3^n )&amp;= \\ 3(11\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \ldots \end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p> Answer 77: <p><strong>HINT</strong></p> <p>From $$x^{n+1}-y^{n+1}=(x^n+y^n)(x-y)+xy(x^{n-1}+y^{n-1})$$</p> <p>We have </p> <p>$$6^{2n+2}-3^{n+1}=(6^{2n}+3^{n})(6^2-3)+108 (6^{2n-2}+3^{n-1})$$</p> <p>We use induction: If $n-1$ is true, $n+1$ is true. </p> Answer 78: <p>${\rm mod}\ 11\!:\ 6^{\large 2}\!\equiv 3\,\Rightarrow\, (6^{\large 2})^{\large N}\!\equiv 3^{\large N} $ by the <a href="https://math.stackexchange.com/a/879262/242">Congruence Power Rule</a> (it has a trivial inductive proof).</p> <p><strong>Remark</strong> $\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisbility "assembly language" instead of higher-level arithmetical language. 
This viewpoint is discussed at length in <a href="https://math.stackexchange.com/a/1179145/242">this answer.</a></p> Answer 79: <p>You don't really need induction, or just use some simple production, as $$6^{2n}-3^n=3^n(12^n-1)$$</p> Answer 80: <p>We have $P(n)=\frac{6^{2n}-3^n}{11} \in \mathbb{N}$.</p> <p>You have proven that the base case is true.</p> <p>Now, assume true for $n=k$.</p> <p>$$6^{2k}-3^k=11p \tag{1}$$</p> <p>Where $p \in \mathbb{N}$.</p> <p>True for $n=k+1$.</p> <p>$6^{2(k+1)}-3^{k+1}=6^{2k+2}-3^{k+1}=36 \cdot 6^{2k}-3 \cdot 3^k \tag{2}$</p> <p>Now, <strong>here comes the trick.</strong></p> <p>We can substitute from $6^{2k}$ from equation $(1)$ into equation $(2)$.</p> <p>$$36 \cdot (11p+3^k)-3 \cdot 3^k=11\cdot 36p+36\cdot 3^k-3 \cdot 3^k=11 \cdot 36p-33 \cdot 3^k \tag{3}$$</p> <p>Since $11\cdot 36p$ is obviously divisible by $11$, and $33 \cdot 3^k$ can be divided by $11$ to give $3 \cdot 3^k$, the whole of equation $(3)$ is divisible by $11$.</p> <p>Hence, we are done.</p> Answer 81: <p>Because $\displaystyle \frac{6^{2n}-3^n}{11}$ is a natural number, we can write for some $k \in \mathbb N, 6^{2n} =3^n +11k $. Then our proof becomes $$\frac {6^{2n+2}-3^{n+1}}{11} = \frac {36 (3^n +11k )-3 (3^{n})}{11} =\frac {33 (3^n) +11 (36k)}{11} \in \mathbb N $$ And thus the proof is completed. Hope it helps. </p> Answer 82: <p>So the statement $P(n)$ could also be formulated that $11$ divides $6^{2n} -3^n$. So $P(n+1)$ would consider \begin{align} 6^{2(n+1)} - 3^{n+1} &amp; = \\ 6^{2n} \times 36 - 3 \times 3^n &amp; = \\ 3(12 \times 6^{2n} - 3^n )&amp;= \\ 3(11\times 6^{2n} + 6^{2n} - 3^{n}) &amp; = \ldots \end{align} Can you use $P(n)$ in order to prove $P(n+1)$?</p> Answer 83: <p><strong>HINT</strong></p> <p>From $$x^{n+1}-y^{n+1}=(x^n+y^n)(x-y)+xy(x^{n-1}+y^{n-1})$$</p> <p>We have </p> <p>$$6^{2n+2}-3^{n+1}=(6^{2n}+3^{n})(6^2-3)+108 (6^{2n-2}+3^{n-1})$$</p> <p>We use induction: If $n-1$ is true, $n+1$ is true. </p> Answer 84: <p>${\rm mod}\ 11\!:\ 6^{\large 2}\!\equiv 3\,\Rightarrow\, (6^{\large 2})^{\large N}\!\equiv 3^{\large N} $ by the <a href="https://math.stackexchange.com/a/879262/242">Congruence Power Rule</a> (it has a trivial inductive proof).</p> <p><strong>Remark</strong> $\ $ Some of the other answers essentially do the same, except they eliminate the congruence language, and directly inline the proof of the Power Rule (or its inductive basis - the Congruence Product Rule), i.e. they are presented in divisbility "assembly language" instead of higher-level arithmetical language. This viewpoint is discussed at length in <a href="https://math.stackexchange.com/a/1179145/242">this answer.</a></p>
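A quick numerical sanity check of the congruence argument above (an illustrative computation added here, not part of the original thread): since $6^2 = 36 = 3\cdot 11 + 3 \equiv 3 \pmod{11}$, taking $n=2$ gives $$6^{4} - 3^{2} = 1296 - 9 = 1287 = 11\cdot 117,$$ consistent with $6^{2n} = (6^2)^n \equiv 3^n \pmod{11}$ for every $n$.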
If $x$ divides $z$, $y$ divides $z$ and $x$ and $y$ are relatively prime, then $xy$ divides $z$.
2,092,325
7
https://math.stackexchange.com/questions/2092325/if-x-divides-z-y-divides-z-and-x-and-y-are-relatively-prime-then
<p>I've been stuck on this for far too long, but it isn't proved in my lecture notes, so it's supposed to be elementary...</p> <p>Let $x, y, z$ be integers such that $x$ divides $z$, $y$ divides $z$, and $x$ and $y$ are relatively prime; then $xy$ divides $z$.</p> <p>Thanks a lot!</p>
[ { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. 
</p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. 
</p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>\n" }, { "answer_id": 2092341, "score": 7, "text": "<p>$\\gcd(x,y)=1$ $\\implies$ there are integers $s,t$ such that $sx+ty=1$.</p>\n\n<p>Let $z=ax=by$. Then $sx+ty=1$ $\\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\\mid z$.</p>\n" }, { "answer_id": 2092666, "score": 2, "text": "<p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>\n" } ]
Question: If $x$ divides $z$, $y$ divides $z$ and $x$ and $y$ are relatively prime, then $xy$ divides $z$. Answer 1: <p>$\gcd(x,y)=1$ $\implies$ there are integers $s,t$ such that $sx+ty=1$.</p> <p>Let $z=ax=by$. Then $sx+ty=1$ $\implies$ $sxz+tyz=z$, i.e. $(sb+ta)xy=z$. Thus $xy\mid z$.</p> Answer 2: <p>This also appears from Euclid's Elements: If two numbers divide a third, their lcm also divides it [VII, 35]. If two numbers are relatively prime, their product is their lcm [VII, 34]. Therefore, if two relatively prime numbers divide a third number, their product also divides it. </p>
Why can any rational number be written as $P/Q$?
2,092,404
7
https://math.stackexchange.com/questions/2092404/why-any-rational-number-can-be-written-as-p-q
<p>Why can any rational number be written as $P/Q$? And how does one prove that, where $p,q$ are integers and at least one of $p,q$ is odd?</p> <p>I can see that at least one of $p,q$ is odd, but I don't know how to write down the proof properly.</p> <p>What I did is:<br> Let $x$ be a rational number.<br> Case 1: $x$ is an integer<br> then $x = x/1$ where $1$ is an odd number.<br> Case 2: $x$ is not an integer<br> write $x = m/n$<br> if $m,n$ are both even, then the $2$ cancels out, and repeat the case $2$ process until at least one of the two numbers is odd.</p>
[ { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. 
So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. 
So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. 
So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. 
So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. 
So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. 
So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" }, { "answer_id": 2092430, "score": 5, "text": "<p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\\ne0$ can be written as\n$$\nN=2^an\n$$\nwhere $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. So we can assume also $P\\ne0$.</p>\n\n<p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p>\n\n<ol>\n<li><p>$a&gt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li>\n<li><p>$a=b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li>\n<li><p>$a&lt;b$ and $\\dfrac{P}{Q}=\\dfrac{2^ap}{2^bq}=\\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li>\n</ol>\n\n<p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p>\n" }, { "answer_id": 2092412, "score": 3, "text": "<p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\\frac{p}{q}$.</p>\n\n<p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\\ne 0$, but if $q=0$, it's already not a rational number.)</p>\n" } ]
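A worked instance of the $2$-adic case analysis in the first answer (an illustrative example added here; the fraction $12/40$ is chosen for the example): $P = 12 = 2^2\cdot 3$ and $Q = 40 = 2^3\cdot 5$, so $a=2$, $p=3$, $b=3$, $q=5$. Since $a<b$, the third case applies: $$\frac{P}{Q} = \frac{2^2\cdot 3}{2^3\cdot 5} = \frac{3}{2^{\,3-2}\cdot 5} = \frac{3}{10},$$ where the numerator is odd.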
Question: Why can any rational number be written as $P/Q$? Answer 1: <p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\frac{p}{q}$.</p> <p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefore, not simplified. (Assuming $q\ne 0$, but if $q=0$, it's already not a rational number.)</p> Answer 2: <p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\ne0$ can be written as $$ N=2^an $$ where $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$.
So we can assume also $P\ne0$.</p> <p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p> <ol> <li><p>$a&gt;b$ and $\dfrac{P}{Q}=\dfrac{2^ap}{2^bq}=\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li> <li><p>$a=b$ and $\dfrac{P}{Q}=\dfrac{2^ap}{2^bq}=\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li> <li><p>$a&lt;b$ and $\dfrac{P}{Q}=\dfrac{2^ap}{2^bq}=\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li> </ol> <p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p> Answer 26: <p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\frac{p}{q}$.</p> <p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\ne 0$, but if $q=0$, it's already not a rational number.)</p> Answer 27: <p>By the fundamental theorem of arithmetic (uniqueness and existence of decomposition into prime factors), any integer $N\ne0$ can be written as $$ N=2^an $$ where $n$ is odd (the exponent $a$ is a non negative integer and it can be $0$, precisely when $N$ is already odd). The case $P/Q$ where $P=0$ is easy: $P/Q=0/1$. So we can assume also $P\ne0$.</p> <p>Now we have $P=2^ap$ and $Q=2^bq$, with $p$ and $q$ odd, three cases are possible:</p> <ol> <li><p>$a&gt;b$ and $\dfrac{P}{Q}=\dfrac{2^ap}{2^bq}=\dfrac{2^{a-b}p}{q}$ where the denominator is odd;</p></li> <li><p>$a=b$ and $\dfrac{P}{Q}=\dfrac{2^ap}{2^bq}=\dfrac{p}{q}$ where both the numerator and the denominator are odd;</p></li> <li><p>$a&lt;b$ and $\dfrac{P}{Q}=\dfrac{2^ap}{2^bq}=\dfrac{p}{2^{b-a}q}$ where the numerator is odd.</p></li> </ol> <p>This is just a better formalized version of your argument: the key is that you can't keep dividing by $2$ <em>ad infinitum</em>, so the process eventually stops.</p> Answer 28: <p>Because a <em>rational</em> number can be expressed as a <em>ratio</em>, that is to say, any rational number can be represented $\frac{p}{q}$.</p> <p>One of $p,q$ must be odd, because if they were both even, they wouldn't be coprime, and therefor, not simplified. (Assuming $q\ne 0$, but if $q=0$, it's already not a rational number.)</p>
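To make the case analysis above concrete, here is a minimal Python sketch of the factor-of-$2$ extraction; the fraction $24/40$ is a made-up example, not one taken from the question:

```python
def odd_part(n):
    """Return (a, m) with n == 2**a * m and m odd, for n != 0."""
    a = 0
    while n % 2 == 0:
        n //= 2
        a += 1
    return a, n

# Hypothetical example: P/Q = 24/40 = (2**3 * 3) / (2**3 * 5).
P, Q = 24, 40
a, p = odd_part(P)
b, q = odd_part(Q)
k = min(a, b)                                  # cancel the common power of 2
num, den = 2 ** (a - k) * p, 2 ** (b - k) * q
print(num, den)                                # 3 5 -- the a == b case: both odd
```

After the common power of $2$ is cancelled, exactly one of the three cases applies, so at least one of the numerator and the denominator is odd.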
Dropping a Lowest Score
2,092,688
7
https://math.stackexchange.com/questions/2092688/dropping-a-lowest-score
<p>I have a lab class, where I drop the lowest score students receive, easy enough. Except, all items are not weighted equally, (lab A might be worth 10 points, while lab B might be worth 23 points, and so on).</p> <p>Right now, what I do in an excel file is calculate the total points earned and the maximum points, except in each instance I remove a different assignment from the calculation. Then compare the various percentages and drop the one that yields the highest overall percentage.</p> <p><strong>TL;DR, here is the main question part ;)</strong></p> <p>Is this a problem that can be solved more easily, or can I answer the question of which to drop with a formula? I'd love to include the calculation in my grade book so it happens automatically. Right now, I have to go in and drop an assignment manually for each student, since my only option is to drop "lowest score" which doesn't work since 5/10 has a different impact than 3/30. <em>(I realize I could scale everything and make them all worth the same amount, but that complicates other parts of the grade for me, so isn't ideal)</em></p> <p>I've included a screen shot of what I do in excel for hopeful clarity. <a href="https://i.stack.imgur.com/4vKvj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4vKvj.png" alt="Excel Screenshot"></a></p> <p>The bottom three rows are what I look at.</p> <p><em>Total Score w/o:</em> Total earned points (from row 2) without the lab corresponding to that column</p> <p><em>Total Max w/o:</em> Total maximum points possible (from row 3) without the lab corresponding to that column</p> <p><em>Combined Percent w/o:</em> Just the values from $\displaystyle\frac{\text{Total Score w/o}}{\text{Max Score w/o}}$ for each column.</p> <p>I have conditional highlighting which shows me the highest percentage, so in this case, I would drop the student "Hydrate" lab assignment from their grade.</p> <p>*Note: I will confess I really wasn't sure what tags I should use. This stackexchange is way out of my comfort zone, so most terms were not familiar. Feel free to change them to whatever is most appropriate.</p>
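For what it's worth, the recompute-without-each-assignment procedure described above takes only a few lines outside of Excel as well. This is a minimal sketch with made-up lab names and scores, not the student data from the screenshot:

```python
# Hypothetical (name, earned, possible) triples -- placeholders only.
labs = [("Density", 8, 10), ("Hydrate", 12, 23), ("Titration", 27, 30)]

S = sum(earned for _, earned, _ in labs)       # total earned points
T = sum(possible for _, _, possible in labs)   # total possible points

def percent_without(earned, possible):
    """Overall percentage if this one assignment is dropped."""
    return (S - earned) / (T - possible)

best = max(labs, key=lambda lab: percent_without(lab[1], lab[2]))
print(f"Drop {best[0]}: grade becomes {percent_without(best[1], best[2]):.1%}")
```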
[ { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. 
Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. 
For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. 
Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. 
</p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. 
Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. 
</p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. 
For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. 
Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" }, { "answer_id": 2092819, "score": 4, "text": "<p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\\frac S T+{St_i-s_iT\\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\\over T(T-t_i)}$. </p>\n\n<p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p>\n\n<p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p>\n" }, { "answer_id": 2092748, "score": 1, "text": "<p>Figure out the the average value of an assignment, say 25 points. \nLeave all the scores intact, but before you average, deflate the\ndenominator by 25 points. Truncate any score that happens to go over\n100 down to 100. </p>\n\n<p>This gives a break to students who do poorly on an assignment (or\nmaybe two), gives a slight advantage to students who perform\nconsistently, and makes your life a lot easier. </p>\n\n<p>You can tell students you're \"in effect\" dropping the lowest\nscore, and they will probably never be able to figure out that's\nnot exactly what you're doing. Or tell them exactly what they're\ndoing and they can do the math to see they're still getting a\nroughly equivalent break for a missed or botched assignment.</p>\n\n<p>I've done this for the past 10 years with no complaints.</p>\n" } ]
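The algebraic rewrite in the first answer is easy to sanity-check numerically. A minimal sketch using exact fractions and hypothetical scores (the data behind the quoted $0.129$, $-0.017$, … values lives in comments not reproduced here):

```python
from fractions import Fraction

scores = [(5, 10), (18, 20), (25, 30), (33, 35)]   # hypothetical (s_i, t_i)
S = sum(s for s, _ in scores)
T = sum(t for _, t in scores)

for s_i, t_i in scores:
    direct = Fraction(S - s_i, T - t_i)            # (S - s_i) / (T - t_i)
    adjust = Fraction(S * t_i - s_i * T, T * (T - t_i))
    assert direct == Fraction(S, T) + adjust       # the identity holds exactly
    print(f"{s_i}/{t_i}: adjustment {float(adjust):+.3f}")
```

Since $S/T$ is the same for every assignment, ranking by the adjustment term picks the same assignment to drop as ranking by the full percentage.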
Question: Dropping a Lowest Score Answer 1: <p>Let $t_i$ and $s_i$ be the possible points and actual points scored, respectively, for each individual assignment, and let $T$ and $S$ be the respective sums over all assignments. Assuming that the goal is to produce the highest percentage for the student, you’re trying to maximize ${S-s_i\over T-t_i}$. With a bit of algebraic manipulation, this can be rewritten as $\frac S T+{St_i-s_iT\over T(T-t_i)}$, so drop the assignment that maximizes ${St_i-s_iT\over T(T-t_i)}$. </p> <p>For the small example in your comment to Travis’ answer, these values are $0.129$, $-0.017$, $-0.026$ and $-0.060$, so the first score should be dropped. For the slightly larger data set in those comments, this method selects the 15/35 score ($0.036$), just as you had originally computed. For the example in your question, “Hydrate” is the winner with a value of $0.045$, so it should be dropped, which also agrees with the method you’d been using. </p> <p>Of course, if you’re doing this in a spreadsheet anyway, you can compute ${S-s_i\over T-t_i}$ for each assignment and have the spreadsheet find the highest value for you, as Henry points out in his comment to your question.</p> Answer 2: <p>Figure out the average value of an assignment, say 25 points. Leave all the scores intact, but before you average, deflate the denominator by 25 points. Truncate any score that happens to go over 100 down to 100. </p> <p>This gives a break to students who do poorly on an assignment (or maybe two), gives a slight advantage to students who perform consistently, and makes your life a lot easier. </p> <p>You can tell students you're "in effect" dropping the lowest score, and they will probably never be able to figure out that's not exactly what you're doing. Or tell them exactly what you're doing and they can do the math to see they're still getting a roughly equivalent break for a missed or botched assignment.</p> <p>I've done this for the past 10 years with no complaints.</p>
Or tell them exactly what they're doing and they can do the math to see they're still getting a roughly equivalent break for a missed or botched assignment.</p> <p>I've done this for the past 10 years with no complaints.</p>
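A minimal sketch in code of the drop-one rule from the formula answer above; the score pairs below are hypothetical, not taken from the question. It picks the index $i$ maximizing $(S-s_i)/(T-t_i)$, which by the algebra above is the same as maximizing $(St_i-s_iT)/(T(T-t_i))$.

<pre><code># Minimal sketch (hypothetical scores): drop the assignment whose removal
# maximizes the remaining percentage (S - s_i) / (T - t_i).
scores = [(9, 20), (21, 25), (28, 35), (47, 50)]  # (earned s_i, possible t_i)

S = sum(s for s, t in scores)  # total points earned
T = sum(t for s, t in scores)  # total points possible

# Equivalent to maximizing (S*t_i - s_i*T) / (T * (T - t_i)).
drop = max(range(len(scores)), key=lambda i: (S - scores[i][0]) / (T - scores[i][1]))
print(drop, (S - scores[drop][0]) / (T - scores[drop][1]))
</code></pre>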
Prove $\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\sum_{j=1}^{2k}\frac{\left ( -1 \right )^{j}}{j}=\frac{\pi ^{2}}{48}+\frac{1}{4}\ln^22$.
2,092,814
7
https://math.stackexchange.com/questions/2092814/prove-sum-k-1-infty-frac-left-1-right-kk-sum-j-12k-frac
<p>How to prove the following series, $$\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\sum_{j=1}^{2k}\frac{\left ( -1 \right )^{j}}{j}=\frac{\pi ^{2}}{48}+\frac{1}{4}\ln^22$$ I know a formula which might be useful: $$\sum_{j=1}^{n}\frac{\left ( -1 \right )^{j-1}}{j}=\ln 2+\left ( -1 \right )^{n-1}\int_{0}^{1}\frac{x^{n}}{1+x}\mathrm{d}x$$ Any hint will be appreciated.</p>
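A quick sketch of where the quoted formula comes from (it is standard, recorded here so the question is self-contained): summing the finite geometric series $\sum_{j=1}^{n}(-x)^{j-1}=\frac{1-(-x)^{n}}{1+x}$ and integrating term by term over $[0,1]$ gives $$\sum_{j=1}^{n}\frac{(-1)^{j-1}}{j}=\int_{0}^{1}\frac{1-(-x)^{n}}{1+x}\,\mathrm{d}x=\ln 2+(-1)^{n-1}\int_{0}^{1}\frac{x^{n}}{1+x}\,\mathrm{d}x.$$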
[ { "answer_id": 2092820, "score": 6, "text": "<p>Yes! As you mentioned, it's a very useful formula.</p>\n\n<p>Use \n$$\\sum_{j=1}^{n}\\frac{\\left ( -1 \\right )^{j-1}}{j}=\\ln 2+\\left ( -1 \\right )^{n-1}\\int_{0}^{1}\\frac{x^{n}}{1+x}\\, \\mathrm{d}x$$\nwe get\n\\begin{align*}\n&amp;\\sum_{k=1}^{\\infty }\\frac{\\left ( -1 \\right )^{k}}{k}\\sum_{j=1}^{2k}\\frac{\\left ( -1 \\right )^{j}}{j}=\\ln 2\\sum_{k=1}^{\\infty }\\frac{\\left ( -1 \\right )^{k}}{k}-\\sum_{k=1}^{\\infty }\\frac{\\left ( -1 \\right )^{k}}{k}\\int_{0}^{1}\\frac{x^{2k}}{1+x}\\, \\mathrm{d}x\\\\\n&amp;=\\ln^22+\\int_{0}^{1}\\frac{1}{1+x}\\sum_{k=1}^{\\infty }\\frac{\\left ( -x^{2} \\right )^{k}}{k}\\, \\mathrm{d}x=\\ln^22-\\int_{0}^{1}\\frac{\\ln\\left ( 1+x^{2} \\right )}{1+x}\\, \\mathrm{d}x\n\\end{align*}\nlet\n$$f\\left ( t \\right )=\\int_{0}^{1}\\frac{\\ln\\left ( 1+tx^{2} \\right )}{1+x}\\,\\mathrm{d}x$$\nthen \n\\begin{align*}\n f{}'\\left ( t \\right ) &amp;=\\int_{0}^{1}\\frac{x^{2}}{\\left ( 1+x \\right )\\left ( 1+tx^{2} \\right )}\\,\\mathrm{d}x \\\\\n &amp;=\\frac{1}{t+1}\\int_{0}^{1}\\frac{x-1}{1+tx^{2}}\\,\\mathrm{d}x+\\frac{1}{t+1}\\int_{0}^{1}\\frac{\\mathrm{d}x}{x+1} \\\\\n &amp;=\\frac{1}{t+1}\\left [ \\frac{1}{2t}\\ln\\left ( 1+tx^{2} \\right )-\\frac{1}{\\sqrt{t}}\\arctan\\left ( \\sqrt{t}x \\right )+\\ln\\left ( x+1 \\right ) \\right ]_{0}^{1} \\\\\n &amp;=\\frac{1}{t+1}\\left [ \\frac{1}{2t}\\ln\\left ( 1+t \\right )-\\frac{1}{\\sqrt{t}}\\arctan\\left ( \\sqrt{t} \\right )+\\ln 2 \\right ]\n\\end{align*}\nIntegrate back\n$$\\Rightarrow f\\left ( t \\right )=\\frac{1}{2}\\left [ -\\mathrm{Li}_{2}\\left ( -t \\right )-\\frac{1}{2}\\ln^{2}\\left ( t+1 \\right ) \\right ]-\\arctan^{2}\\sqrt{t}+\\ln 2\\ln\\left ( t+1 \\right )$$\nthen let $t=1$ and use $\\displaystyle \\mathrm{Li}_{2}\\left ( -1 \\right )=-\\frac{\\pi ^{2}}{12}$, you will get the answer as wanted.</p>\n\n<p><strong>EDIT:</strong>\n$$\\mathrm{Li}_{2}\\left ( -1 \\right )=\\sum_{k=1}^{\\infty }\\frac{\\left ( -1 \\right )^{k}}{k^{2}}=\\sum_{k=1}^{\\infty }\\frac{1}{k^{2}}-2\\sum_{k=1}^{\\infty }\\frac{1}{\\left ( 2k-1 \\right )^{2}}=\\frac{\\pi ^{2}}{6}-2\\cdot \\frac{\\pi ^{2}}{8}=-\\frac{\\pi ^{2}}{12}$$</p>\n" } ]
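For completeness, the evaluation the answer leaves to the reader: with $\mathrm{Li}_{2}(-1)=-\frac{\pi^{2}}{12}$ and $\arctan^{2}\sqrt{1}=\frac{\pi^{2}}{16}$, $$f(1)=\frac{1}{2}\left[\frac{\pi^{2}}{12}-\frac{1}{2}\ln^{2}2\right]-\frac{\pi^{2}}{16}+\ln^{2}2=\frac{3}{4}\ln^{2}2-\frac{\pi^{2}}{48},$$ so the series equals $\ln^{2}2-f(1)=\frac{\pi^{2}}{48}+\frac{1}{4}\ln^{2}2$, as claimed. A quick numerical truncation agrees with the closed form:

<pre><code># Sanity check: partial sums of the double series vs. the closed form.
import math

total, inner, j = 0.0, 0.0, 0
for k in range(1, 100001):
    while j < 2 * k:          # extend the inner alternating sum to 2k terms
        j += 1
        inner += (-1) ** j / j
    total += (-1) ** k / k * inner

print(total, math.pi ** 2 / 48 + math.log(2) ** 2 / 4)  # both ~0.32573
</code></pre>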
Question: Prove $\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\sum_{j=1}^{2k}\frac{\left ( -1 \right )^{j}}{j}=\frac{\pi ^{2}}{48}+\frac{1}{4}\ln^22$. Answer 1: <p>Yes! As you mentioned, it's a very useful formula.</p> <p>Use $$\sum_{j=1}^{n}\frac{\left ( -1 \right )^{j-1}}{j}=\ln 2+\left ( -1 \right )^{n-1}\int_{0}^{1}\frac{x^{n}}{1+x}\, \mathrm{d}x$$ we get \begin{align*} &amp;\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\sum_{j=1}^{2k}\frac{\left ( -1 \right )^{j}}{j}=\ln 2\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}-\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\int_{0}^{1}\frac{x^{2k}}{1+x}\, \mathrm{d}x\\ &amp;=\ln^22+\int_{0}^{1}\frac{1}{1+x}\sum_{k=1}^{\infty }\frac{\left ( -x^{2} \right )^{k}}{k}\, \mathrm{d}x=\ln^22-\int_{0}^{1}\frac{\ln\left ( 1+x^{2} \right )}{1+x}\, \mathrm{d}x \end{align*} let $$f\left ( t \right )=\int_{0}^{1}\frac{\ln\left ( 1+tx^{2} \right )}{1+x}\,\mathrm{d}x$$ then \begin{align*} f{}'\left ( t \right ) &amp;=\int_{0}^{1}\frac{x^{2}}{\left ( 1+x \right )\left ( 1+tx^{2} \right )}\,\mathrm{d}x \\ &amp;=\frac{1}{t+1}\int_{0}^{1}\frac{x-1}{1+tx^{2}}\,\mathrm{d}x+\frac{1}{t+1}\int_{0}^{1}\frac{\mathrm{d}x}{x+1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+tx^{2} \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t}x \right )+\ln\left ( x+1 \right ) \right ]_{0}^{1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+t \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t} \right )+\ln 2 \right ] \end{align*} Integrate back $$\Rightarrow f\left ( t \right )=\frac{1}{2}\left [ -\mathrm{Li}_{2}\left ( -t \right )-\frac{1}{2}\ln^{2}\left ( t+1 \right ) \right ]-\arctan^{2}\sqrt{t}+\ln 2\ln\left ( t+1 \right )$$ then let $t=1$ and use $\displaystyle \mathrm{Li}_{2}\left ( -1 \right )=-\frac{\pi ^{2}}{12}$, you will get the answer as wanted.</p> <p><strong>EDIT:</strong> $$\mathrm{Li}_{2}\left ( -1 \right )=\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k^{2}}=\sum_{k=1}^{\infty }\frac{1}{k^{2}}-2\sum_{k=1}^{\infty }\frac{1}{\left ( 2k-1 \right )^{2}}=\frac{\pi ^{2}}{6}-2\cdot \frac{\pi ^{2}}{8}=-\frac{\pi ^{2}}{12}$$</p>
As you mentioned, it's a very useful formula.</p> <p>Use $$\sum_{j=1}^{n}\frac{\left ( -1 \right )^{j-1}}{j}=\ln 2+\left ( -1 \right )^{n-1}\int_{0}^{1}\frac{x^{n}}{1+x}\, \mathrm{d}x$$ we get \begin{align*} &amp;\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\sum_{j=1}^{2k}\frac{\left ( -1 \right )^{j}}{j}=\ln 2\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}-\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\int_{0}^{1}\frac{x^{2k}}{1+x}\, \mathrm{d}x\\ &amp;=\ln^22+\int_{0}^{1}\frac{1}{1+x}\sum_{k=1}^{\infty }\frac{\left ( -x^{2} \right )^{k}}{k}\, \mathrm{d}x=\ln^22-\int_{0}^{1}\frac{\ln\left ( 1+x^{2} \right )}{1+x}\, \mathrm{d}x \end{align*} let $$f\left ( t \right )=\int_{0}^{1}\frac{\ln\left ( 1+tx^{2} \right )}{1+x}\,\mathrm{d}x$$ then \begin{align*} f{}'\left ( t \right ) &amp;=\int_{0}^{1}\frac{x^{2}}{\left ( 1+x \right )\left ( 1+tx^{2} \right )}\,\mathrm{d}x \\ &amp;=\frac{1}{t+1}\int_{0}^{1}\frac{x-1}{1+tx^{2}}\,\mathrm{d}x+\frac{1}{t+1}\int_{0}^{1}\frac{\mathrm{d}x}{x+1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+tx^{2} \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t}x \right )+\ln\left ( x+1 \right ) \right ]_{0}^{1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+t \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t} \right )+\ln 2 \right ] \end{align*} Integrate back $$\Rightarrow f\left ( t \right )=\frac{1}{2}\left [ -\mathrm{Li}_{2}\left ( -t \right )-\frac{1}{2}\ln^{2}\left ( t+1 \right ) \right ]-\arctan^{2}\sqrt{t}+\ln 2\ln\left ( t+1 \right )$$ then let $t=1$ and use $\displaystyle \mathrm{Li}_{2}\left ( -1 \right )=-\frac{\pi ^{2}}{12}$, you will get the answer as wanted.</p> <p><strong>EDIT:</strong> $$\mathrm{Li}_{2}\left ( -1 \right )=\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k^{2}}=\sum_{k=1}^{\infty }\frac{1}{k^{2}}-2\sum_{k=1}^{\infty }\frac{1}{\left ( 2n-1 \right )^{2}}=\frac{\pi ^{2}}{6}-2\cdot \frac{\pi ^{2}}{8}=-\frac{\pi ^{2}}{12}$$</p> Answer 3: <p>Yes! 
As you mentioned, it's a very useful formula.</p> <p>Use $$\sum_{j=1}^{n}\frac{\left ( -1 \right )^{j-1}}{j}=\ln 2+\left ( -1 \right )^{n-1}\int_{0}^{1}\frac{x^{n}}{1+x}\, \mathrm{d}x$$ we get \begin{align*} &amp;\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\sum_{j=1}^{2k}\frac{\left ( -1 \right )^{j}}{j}=\ln 2\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}-\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\int_{0}^{1}\frac{x^{2k}}{1+x}\, \mathrm{d}x\\ &amp;=\ln^22+\int_{0}^{1}\frac{1}{1+x}\sum_{k=1}^{\infty }\frac{\left ( -x^{2} \right )^{k}}{k}\, \mathrm{d}x=\ln^22-\int_{0}^{1}\frac{\ln\left ( 1+x^{2} \right )}{1+x}\, \mathrm{d}x \end{align*} let $$f\left ( t \right )=\int_{0}^{1}\frac{\ln\left ( 1+tx^{2} \right )}{1+x}\,\mathrm{d}x$$ then \begin{align*} f{}'\left ( t \right ) &amp;=\int_{0}^{1}\frac{x^{2}}{\left ( 1+x \right )\left ( 1+tx^{2} \right )}\,\mathrm{d}x \\ &amp;=\frac{1}{t+1}\int_{0}^{1}\frac{x-1}{1+tx^{2}}\,\mathrm{d}x+\frac{1}{t+1}\int_{0}^{1}\frac{\mathrm{d}x}{x+1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+tx^{2} \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t}x \right )+\ln\left ( x+1 \right ) \right ]_{0}^{1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+t \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t} \right )+\ln 2 \right ] \end{align*} Integrate back $$\Rightarrow f\left ( t \right )=\frac{1}{2}\left [ -\mathrm{Li}_{2}\left ( -t \right )-\frac{1}{2}\ln^{2}\left ( t+1 \right ) \right ]-\arctan^{2}\sqrt{t}+\ln 2\ln\left ( t+1 \right )$$ then let $t=1$ and use $\displaystyle \mathrm{Li}_{2}\left ( -1 \right )=-\frac{\pi ^{2}}{12}$, you will get the answer as wanted.</p> <p><strong>EDIT:</strong> $$\mathrm{Li}_{2}\left ( -1 \right )=\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k^{2}}=\sum_{k=1}^{\infty }\frac{1}{k^{2}}-2\sum_{k=1}^{\infty }\frac{1}{\left ( 2n-1 \right )^{2}}=\frac{\pi ^{2}}{6}-2\cdot \frac{\pi ^{2}}{8}=-\frac{\pi ^{2}}{12}$$</p> Answer 4: <p>Yes! 
As you mentioned, it's a very useful formula.</p> <p>Use $$\sum_{j=1}^{n}\frac{\left ( -1 \right )^{j-1}}{j}=\ln 2+\left ( -1 \right )^{n-1}\int_{0}^{1}\frac{x^{n}}{1+x}\, \mathrm{d}x$$ we get \begin{align*} &amp;\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\sum_{j=1}^{2k}\frac{\left ( -1 \right )^{j}}{j}=\ln 2\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}-\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\int_{0}^{1}\frac{x^{2k}}{1+x}\, \mathrm{d}x\\ &amp;=\ln^22+\int_{0}^{1}\frac{1}{1+x}\sum_{k=1}^{\infty }\frac{\left ( -x^{2} \right )^{k}}{k}\, \mathrm{d}x=\ln^22-\int_{0}^{1}\frac{\ln\left ( 1+x^{2} \right )}{1+x}\, \mathrm{d}x \end{align*} let $$f\left ( t \right )=\int_{0}^{1}\frac{\ln\left ( 1+tx^{2} \right )}{1+x}\,\mathrm{d}x$$ then \begin{align*} f{}'\left ( t \right ) &amp;=\int_{0}^{1}\frac{x^{2}}{\left ( 1+x \right )\left ( 1+tx^{2} \right )}\,\mathrm{d}x \\ &amp;=\frac{1}{t+1}\int_{0}^{1}\frac{x-1}{1+tx^{2}}\,\mathrm{d}x+\frac{1}{t+1}\int_{0}^{1}\frac{\mathrm{d}x}{x+1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+tx^{2} \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t}x \right )+\ln\left ( x+1 \right ) \right ]_{0}^{1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+t \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t} \right )+\ln 2 \right ] \end{align*} Integrate back $$\Rightarrow f\left ( t \right )=\frac{1}{2}\left [ -\mathrm{Li}_{2}\left ( -t \right )-\frac{1}{2}\ln^{2}\left ( t+1 \right ) \right ]-\arctan^{2}\sqrt{t}+\ln 2\ln\left ( t+1 \right )$$ then let $t=1$ and use $\displaystyle \mathrm{Li}_{2}\left ( -1 \right )=-\frac{\pi ^{2}}{12}$, you will get the answer as wanted.</p> <p><strong>EDIT:</strong> $$\mathrm{Li}_{2}\left ( -1 \right )=\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k^{2}}=\sum_{k=1}^{\infty }\frac{1}{k^{2}}-2\sum_{k=1}^{\infty }\frac{1}{\left ( 2n-1 \right )^{2}}=\frac{\pi ^{2}}{6}-2\cdot \frac{\pi ^{2}}{8}=-\frac{\pi ^{2}}{12}$$</p> Answer 5: <p>Yes! 
As you mentioned, it's a very useful formula.</p> <p>Use $$\sum_{j=1}^{n}\frac{\left ( -1 \right )^{j-1}}{j}=\ln 2+\left ( -1 \right )^{n-1}\int_{0}^{1}\frac{x^{n}}{1+x}\, \mathrm{d}x$$ we get \begin{align*} &amp;\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\sum_{j=1}^{2k}\frac{\left ( -1 \right )^{j}}{j}=\ln 2\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}-\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\int_{0}^{1}\frac{x^{2k}}{1+x}\, \mathrm{d}x\\ &amp;=\ln^22+\int_{0}^{1}\frac{1}{1+x}\sum_{k=1}^{\infty }\frac{\left ( -x^{2} \right )^{k}}{k}\, \mathrm{d}x=\ln^22-\int_{0}^{1}\frac{\ln\left ( 1+x^{2} \right )}{1+x}\, \mathrm{d}x \end{align*} let $$f\left ( t \right )=\int_{0}^{1}\frac{\ln\left ( 1+tx^{2} \right )}{1+x}\,\mathrm{d}x$$ then \begin{align*} f{}'\left ( t \right ) &amp;=\int_{0}^{1}\frac{x^{2}}{\left ( 1+x \right )\left ( 1+tx^{2} \right )}\,\mathrm{d}x \\ &amp;=\frac{1}{t+1}\int_{0}^{1}\frac{x-1}{1+tx^{2}}\,\mathrm{d}x+\frac{1}{t+1}\int_{0}^{1}\frac{\mathrm{d}x}{x+1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+tx^{2} \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t}x \right )+\ln\left ( x+1 \right ) \right ]_{0}^{1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+t \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t} \right )+\ln 2 \right ] \end{align*} Integrate back $$\Rightarrow f\left ( t \right )=\frac{1}{2}\left [ -\mathrm{Li}_{2}\left ( -t \right )-\frac{1}{2}\ln^{2}\left ( t+1 \right ) \right ]-\arctan^{2}\sqrt{t}+\ln 2\ln\left ( t+1 \right )$$ then let $t=1$ and use $\displaystyle \mathrm{Li}_{2}\left ( -1 \right )=-\frac{\pi ^{2}}{12}$, you will get the answer as wanted.</p> <p><strong>EDIT:</strong> $$\mathrm{Li}_{2}\left ( -1 \right )=\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k^{2}}=\sum_{k=1}^{\infty }\frac{1}{k^{2}}-2\sum_{k=1}^{\infty }\frac{1}{\left ( 2n-1 \right )^{2}}=\frac{\pi ^{2}}{6}-2\cdot \frac{\pi ^{2}}{8}=-\frac{\pi ^{2}}{12}$$</p> Answer 6: <p>Yes! 
As you mentioned, it's a very useful formula.</p> <p>Use $$\sum_{j=1}^{n}\frac{\left ( -1 \right )^{j-1}}{j}=\ln 2+\left ( -1 \right )^{n-1}\int_{0}^{1}\frac{x^{n}}{1+x}\, \mathrm{d}x$$ we get \begin{align*} &amp;\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\sum_{j=1}^{2k}\frac{\left ( -1 \right )^{j}}{j}=\ln 2\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}-\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\int_{0}^{1}\frac{x^{2k}}{1+x}\, \mathrm{d}x\\ &amp;=\ln^22+\int_{0}^{1}\frac{1}{1+x}\sum_{k=1}^{\infty }\frac{\left ( -x^{2} \right )^{k}}{k}\, \mathrm{d}x=\ln^22-\int_{0}^{1}\frac{\ln\left ( 1+x^{2} \right )}{1+x}\, \mathrm{d}x \end{align*} let $$f\left ( t \right )=\int_{0}^{1}\frac{\ln\left ( 1+tx^{2} \right )}{1+x}\,\mathrm{d}x$$ then \begin{align*} f{}'\left ( t \right ) &amp;=\int_{0}^{1}\frac{x^{2}}{\left ( 1+x \right )\left ( 1+tx^{2} \right )}\,\mathrm{d}x \\ &amp;=\frac{1}{t+1}\int_{0}^{1}\frac{x-1}{1+tx^{2}}\,\mathrm{d}x+\frac{1}{t+1}\int_{0}^{1}\frac{\mathrm{d}x}{x+1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+tx^{2} \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t}x \right )+\ln\left ( x+1 \right ) \right ]_{0}^{1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+t \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t} \right )+\ln 2 \right ] \end{align*} Integrate back $$\Rightarrow f\left ( t \right )=\frac{1}{2}\left [ -\mathrm{Li}_{2}\left ( -t \right )-\frac{1}{2}\ln^{2}\left ( t+1 \right ) \right ]-\arctan^{2}\sqrt{t}+\ln 2\ln\left ( t+1 \right )$$ then let $t=1$ and use $\displaystyle \mathrm{Li}_{2}\left ( -1 \right )=-\frac{\pi ^{2}}{12}$, you will get the answer as wanted.</p> <p><strong>EDIT:</strong> $$\mathrm{Li}_{2}\left ( -1 \right )=\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k^{2}}=\sum_{k=1}^{\infty }\frac{1}{k^{2}}-2\sum_{k=1}^{\infty }\frac{1}{\left ( 2n-1 \right )^{2}}=\frac{\pi ^{2}}{6}-2\cdot \frac{\pi ^{2}}{8}=-\frac{\pi ^{2}}{12}$$</p> Answer 7: <p>Yes! 
As you mentioned, it's a very useful formula.</p> <p>Use $$\sum_{j=1}^{n}\frac{\left ( -1 \right )^{j-1}}{j}=\ln 2+\left ( -1 \right )^{n-1}\int_{0}^{1}\frac{x^{n}}{1+x}\, \mathrm{d}x$$ we get \begin{align*} &amp;\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\sum_{j=1}^{2k}\frac{\left ( -1 \right )^{j}}{j}=\ln 2\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}-\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\int_{0}^{1}\frac{x^{2k}}{1+x}\, \mathrm{d}x\\ &amp;=\ln^22+\int_{0}^{1}\frac{1}{1+x}\sum_{k=1}^{\infty }\frac{\left ( -x^{2} \right )^{k}}{k}\, \mathrm{d}x=\ln^22-\int_{0}^{1}\frac{\ln\left ( 1+x^{2} \right )}{1+x}\, \mathrm{d}x \end{align*} let $$f\left ( t \right )=\int_{0}^{1}\frac{\ln\left ( 1+tx^{2} \right )}{1+x}\,\mathrm{d}x$$ then \begin{align*} f{}'\left ( t \right ) &amp;=\int_{0}^{1}\frac{x^{2}}{\left ( 1+x \right )\left ( 1+tx^{2} \right )}\,\mathrm{d}x \\ &amp;=\frac{1}{t+1}\int_{0}^{1}\frac{x-1}{1+tx^{2}}\,\mathrm{d}x+\frac{1}{t+1}\int_{0}^{1}\frac{\mathrm{d}x}{x+1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+tx^{2} \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t}x \right )+\ln\left ( x+1 \right ) \right ]_{0}^{1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+t \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t} \right )+\ln 2 \right ] \end{align*} Integrate back $$\Rightarrow f\left ( t \right )=\frac{1}{2}\left [ -\mathrm{Li}_{2}\left ( -t \right )-\frac{1}{2}\ln^{2}\left ( t+1 \right ) \right ]-\arctan^{2}\sqrt{t}+\ln 2\ln\left ( t+1 \right )$$ then let $t=1$ and use $\displaystyle \mathrm{Li}_{2}\left ( -1 \right )=-\frac{\pi ^{2}}{12}$, you will get the answer as wanted.</p> <p><strong>EDIT:</strong> $$\mathrm{Li}_{2}\left ( -1 \right )=\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k^{2}}=\sum_{k=1}^{\infty }\frac{1}{k^{2}}-2\sum_{k=1}^{\infty }\frac{1}{\left ( 2n-1 \right )^{2}}=\frac{\pi ^{2}}{6}-2\cdot \frac{\pi ^{2}}{8}=-\frac{\pi ^{2}}{12}$$</p> Answer 8: <p>Yes! 
As you mentioned, it's a very useful formula.</p> <p>Use $$\sum_{j=1}^{n}\frac{\left ( -1 \right )^{j-1}}{j}=\ln 2+\left ( -1 \right )^{n-1}\int_{0}^{1}\frac{x^{n}}{1+x}\, \mathrm{d}x$$ we get \begin{align*} &amp;\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\sum_{j=1}^{2k}\frac{\left ( -1 \right )^{j}}{j}=\ln 2\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}-\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\int_{0}^{1}\frac{x^{2k}}{1+x}\, \mathrm{d}x\\ &amp;=\ln^22+\int_{0}^{1}\frac{1}{1+x}\sum_{k=1}^{\infty }\frac{\left ( -x^{2} \right )^{k}}{k}\, \mathrm{d}x=\ln^22-\int_{0}^{1}\frac{\ln\left ( 1+x^{2} \right )}{1+x}\, \mathrm{d}x \end{align*} let $$f\left ( t \right )=\int_{0}^{1}\frac{\ln\left ( 1+tx^{2} \right )}{1+x}\,\mathrm{d}x$$ then \begin{align*} f{}'\left ( t \right ) &amp;=\int_{0}^{1}\frac{x^{2}}{\left ( 1+x \right )\left ( 1+tx^{2} \right )}\,\mathrm{d}x \\ &amp;=\frac{1}{t+1}\int_{0}^{1}\frac{x-1}{1+tx^{2}}\,\mathrm{d}x+\frac{1}{t+1}\int_{0}^{1}\frac{\mathrm{d}x}{x+1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+tx^{2} \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t}x \right )+\ln\left ( x+1 \right ) \right ]_{0}^{1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+t \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t} \right )+\ln 2 \right ] \end{align*} Integrate back $$\Rightarrow f\left ( t \right )=\frac{1}{2}\left [ -\mathrm{Li}_{2}\left ( -t \right )-\frac{1}{2}\ln^{2}\left ( t+1 \right ) \right ]-\arctan^{2}\sqrt{t}+\ln 2\ln\left ( t+1 \right )$$ then let $t=1$ and use $\displaystyle \mathrm{Li}_{2}\left ( -1 \right )=-\frac{\pi ^{2}}{12}$, you will get the answer as wanted.</p> <p><strong>EDIT:</strong> $$\mathrm{Li}_{2}\left ( -1 \right )=\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k^{2}}=\sum_{k=1}^{\infty }\frac{1}{k^{2}}-2\sum_{k=1}^{\infty }\frac{1}{\left ( 2n-1 \right )^{2}}=\frac{\pi ^{2}}{6}-2\cdot \frac{\pi ^{2}}{8}=-\frac{\pi ^{2}}{12}$$</p> Answer 9: <p>Yes! 
As you mentioned, it's a very useful formula.</p> <p>Use $$\sum_{j=1}^{n}\frac{\left ( -1 \right )^{j-1}}{j}=\ln 2+\left ( -1 \right )^{n-1}\int_{0}^{1}\frac{x^{n}}{1+x}\, \mathrm{d}x$$ we get \begin{align*} &amp;\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\sum_{j=1}^{2k}\frac{\left ( -1 \right )^{j}}{j}=\ln 2\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}-\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k}\int_{0}^{1}\frac{x^{2k}}{1+x}\, \mathrm{d}x\\ &amp;=\ln^22+\int_{0}^{1}\frac{1}{1+x}\sum_{k=1}^{\infty }\frac{\left ( -x^{2} \right )^{k}}{k}\, \mathrm{d}x=\ln^22-\int_{0}^{1}\frac{\ln\left ( 1+x^{2} \right )}{1+x}\, \mathrm{d}x \end{align*} let $$f\left ( t \right )=\int_{0}^{1}\frac{\ln\left ( 1+tx^{2} \right )}{1+x}\,\mathrm{d}x$$ then \begin{align*} f{}'\left ( t \right ) &amp;=\int_{0}^{1}\frac{x^{2}}{\left ( 1+x \right )\left ( 1+tx^{2} \right )}\,\mathrm{d}x \\ &amp;=\frac{1}{t+1}\int_{0}^{1}\frac{x-1}{1+tx^{2}}\,\mathrm{d}x+\frac{1}{t+1}\int_{0}^{1}\frac{\mathrm{d}x}{x+1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+tx^{2} \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t}x \right )+\ln\left ( x+1 \right ) \right ]_{0}^{1} \\ &amp;=\frac{1}{t+1}\left [ \frac{1}{2t}\ln\left ( 1+t \right )-\frac{1}{\sqrt{t}}\arctan\left ( \sqrt{t} \right )+\ln 2 \right ] \end{align*} Integrate back $$\Rightarrow f\left ( t \right )=\frac{1}{2}\left [ -\mathrm{Li}_{2}\left ( -t \right )-\frac{1}{2}\ln^{2}\left ( t+1 \right ) \right ]-\arctan^{2}\sqrt{t}+\ln 2\ln\left ( t+1 \right )$$ then let $t=1$ and use $\displaystyle \mathrm{Li}_{2}\left ( -1 \right )=-\frac{\pi ^{2}}{12}$, you will get the answer as wanted.</p> <p><strong>EDIT:</strong> $$\mathrm{Li}_{2}\left ( -1 \right )=\sum_{k=1}^{\infty }\frac{\left ( -1 \right )^{k}}{k^{2}}=\sum_{k=1}^{\infty }\frac{1}{k^{2}}-2\sum_{k=1}^{\infty }\frac{1}{\left ( 2n-1 \right )^{2}}=\frac{\pi ^{2}}{6}-2\cdot \frac{\pi ^{2}}{8}=-\frac{\pi ^{2}}{12}$$</p> Answer 10: <p>Yes! 
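A quick numerical consistency check of the computation above (a supplementary sketch in Python; the cutoff $N=2\cdot 10^{5}$ is an arbitrary choice, and nothing in the code is drawn from the original answer): it compares partial sums of the double series with $\ln^2 2-f(1)$, evaluating $f(1)$ from the closed form just derived.

import math

ln2 = math.log(2.0)

# f(1) from the closed form above, using Li_2(-1) = -pi^2/12 and arctan(1) = pi/4
f1 = 0.5 * (math.pi**2 / 12 - 0.5 * ln2**2) - (math.pi / 4)**2 + ln2**2

# Partial sums of sum_{k>=1} (-1)^k/k * sum_{j=1}^{2k} (-1)^j/j,
# updating the inner sum incrementally with the terms j = 2k-1 and j = 2k.
S, inner, sign = 0.0, 0.0, -1.0
for k in range(1, 200001):
    inner += 1.0 / (2 * k) - 1.0 / (2 * k - 1)
    S += sign / k * inner
    sign = -sign

print(S, ln2**2 - f1)  # the two values should agree to roughly five decimals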
Skyscraper sheaves cohomology
2,092,882
7
https://math.stackexchange.com/questions/2092882/skyscraper-sheaves-cohomology
<p>Let $X$ be a topological space and $G$ an abelian group. Denote by $\mathcal{S}$ the skyscraper sheaf with group $G$ at the point $x\in X$. How can I prove that $\mathcal{S}$ has no cohomology, i.e. $H^i(X,\mathcal{S})=0, \: \forall i&gt;0$?</p>
[ { "answer_id": 2093098, "score": 7, "text": "<p>A skyscraper sheaf is flasque, hence has no cohomology: <a href=\"https://books.google.fr/books/about/Algebraic_Geometry.html?id=3rtX9t-nnvwC&amp;redir_esc=y\" rel=\"noreferrer\">Hartshorne</a> Chap. III, Prop.2.5, page 208.</p>\n" }, { "answer_id": 2602159, "score": 6, "text": "<p>You can also see this using Čech cohomology:</p>\n\n<p>Consider an open cover $\\mathfrak{U}=(U_i)_{i\\in I}$ of your space $X$. You can always refine this cover so that only one of the sets $U_i$ contains the point $x$: Pick a set $U_0$ containing $x$ and consider the cover $\\mathfrak{U}'$ consisting of $U_0$ and $U_i\\setminus\\{x\\}$ for all $i\\in I$. Then $\\mathfrak{U}'$ is a refinement of $\\mathfrak{U}$ and $x$ is only contained in the set $U_0$.</p>\n\n<p>In particular, $x$ is not contained in any intersection of two or more distinct sets in $\\mathfrak{U}'$, so the skyscraper sheaf has no sections over these. Hence all higher Čech cocycles are $0$ and all higher cohomology groups vanish.</p>\n\n<p>(I believe this approach can also be found in Forster's book on Riemann Surfaces.)</p>\n" }, { "answer_id": 2092935, "score": 3, "text": "<p>These <a href=\"https://www.ma.utexas.edu/users/iganev/skyscraper.pdf\" rel=\"nofollow noreferrer\">notes</a> probably have enough detail to give you what you want. I'll assume you mean $G$ to be an abelian group. Basically the argument goes like this: if $G$ is an abelian group, take an injective resolution $G \\to I^\\bullet$ of abelian groups. Then $\\pi_*G \\to \\pi_*I^\\bullet$ is an injective resolution, where $\\pi:{\\ast} \\to X$ is the inclusion of a point. Note that $\\pi_*G$ is your skyscraper sheaf. To compute $H^i(X,G)$ take global sections of $\\pi_*G \\to \\pi_*I^\\bullet$, which just gives back the resolution $G\\to I^\\bullet$, which shows that the higher cohomology of $\\pi_*G$ vanishes.</p>\n" } ]
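A detail the first answer leaves to the definition (this remark is a supplementary sketch, not part of the quoted answers): writing $\mathcal S$ for the skyscraper sheaf, $\mathcal S(U)=G$ if $x\in U$ and $\mathcal S(U)=0$ otherwise, so for opens $V\subseteq U$ the restriction map $\mathcal S(U)\to\mathcal S(V)$ is $\mathrm{id}_G$ when $x\in V$, the zero map $G\to 0$ when $x\in U$ but $x\notin V$, and $0\to 0$ when $x\notin U$. Each of these is surjective, so every restriction from $\mathcal S(X)$ is onto and $\mathcal S$ is indeed flasque.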
Question: Skyscraper sheaves cohomology Answer 1: <p>A skyscraper sheaf is flasque, hence has no cohomology: <a href="https://books.google.fr/books/about/Algebraic_Geometry.html?id=3rtX9t-nnvwC&amp;redir_esc=y" rel="noreferrer">Hartshorne</a> Chap. III, Prop.2.5, page 208.</p> Answer 2: <p>You can also see this using Čech cohomology:</p> <p>Consider an open cover $\mathfrak{U}=(U_i)_{i\in I}$ of your space $X$. You can always refine this cover so that only one of the sets $U_i$ contains the point $x$: Pick a set $U_0$ containing $x$ and consider the cover $\mathfrak{U}'$ consisting of $U_0$ and $U_i\setminus\{x\}$ for all $i\in I$. Then $\mathfrak{U}'$ is a refinement of $\mathfrak{U}$ and $x$ is only contained in the set $U_0$.</p> <p>In particular, $x$ is not contained in any intersection of two or more distinct sets in $\mathfrak{U}'$, so the skyscraper sheaf has no sections over these. Hence all higher Čech cocycles are $0$ and all higher cohomology groups vanish.</p> <p>(I believe this approach can also be found in Forster's book on Riemann Surfaces.)</p> Answer 3: <p>These <a href="https://www.ma.utexas.edu/users/iganev/skyscraper.pdf" rel="nofollow noreferrer">notes</a> probably have enough detail to give you what you want. I'll assume you mean $G$ to be an abelian group. Basically the argument goes like this: if $G$ is an abelian group, take an injective resolution $G \to I^\bullet$ of abelian groups. Then $\pi_*G \to \pi_*I^\bullet$ is an injective resolution, where $\pi:{\ast} \to X$ is the inclusion of a point. Note that $\pi_*G$ is your skyscraper sheaf. To compute $H^i(X,G)$ take global sections of $\pi_*G \to \pi_*I^\bullet$, which just gives back the resolution $G\to I^\bullet$, which shows that the higher cohomology of $\pi_*G$ vanishes.</p>
To compute $H^i(X,G)$ take global sections of $\pi_*G \to \pi_*I^\bullet$, which just gives back the resolution $G\to I^\bullet$, which shows that the higher cohomology of $\pi_*G$ vanishes.</p> Answer 7: <p>A skyscraper sheaf is flasque, hence has no cohomology: <a href="https://books.google.fr/books/about/Algebraic_Geometry.html?id=3rtX9t-nnvwC&amp;redir_esc=y" rel="noreferrer">Hartshorne</a> Chap. III, Prop.2.5, page 208.</p> Answer 8: <p>You can also see this using Čech cohomology:</p> <p>Consider an open cover $\mathfrak{U}=(U_i)_{i\in I}$ of your space $X$. You can always refine this cover so that only one of the sets $U_i$ contains the point $x$: Pick a set $U_0$ containing $x$ and consider the cover $\mathfrak{U}'$ consisting of $U_0$ and $U_i\setminus\{x\}$ for all $i\in I$. Then $\mathfrak{U}'$ is a refinement of $\mathfrak{U}$ and $x$ is only contained in the set $U_0$.</p> <p>In particular, $x$ is not contained in any intersection of two or more distinct sets in $\mathfrak{U}'$, so the scyscraper sheaf has no sections over these. Hence all higher Čech-cocylces are $0$ and all higher cohomology groups vanish.</p> <p>(I believe this approach can also be found in Forster's book on Riemann Surfaces.)</p> Answer 9: <p>These <a href="https://www.ma.utexas.edu/users/iganev/skyscraper.pdf" rel="nofollow noreferrer">notes</a> probably have enough detail to give you what you want. I'll assume you mean $G$ to be an abelian group. Basically the argument goes like this: if $G$ is an abelian group, take an injective resolution $G \to I^\bullet$ of abelian groups. Then $\pi_*G \to \pi_*I^\bullet$ is an injective resolution, where $\pi:{\ast} \to X$ is the inclusion of a point. Note that $\pi_*G$ is your skyscraper sheaf. To compute $H^i(X,G)$ take global sections of $\pi_*G \to \pi_*I^\bullet$, which just gives back the resolution $G\to I^\bullet$, which shows that the higher cohomology of $\pi_*G$ vanishes.</p> Answer 10: <p>A skyscraper sheaf is flasque, hence has no cohomology: <a href="https://books.google.fr/books/about/Algebraic_Geometry.html?id=3rtX9t-nnvwC&amp;redir_esc=y" rel="noreferrer">Hartshorne</a> Chap. III, Prop.2.5, page 208.</p> Answer 11: <p>You can also see this using Čech cohomology:</p> <p>Consider an open cover $\mathfrak{U}=(U_i)_{i\in I}$ of your space $X$. You can always refine this cover so that only one of the sets $U_i$ contains the point $x$: Pick a set $U_0$ containing $x$ and consider the cover $\mathfrak{U}'$ consisting of $U_0$ and $U_i\setminus\{x\}$ for all $i\in I$. Then $\mathfrak{U}'$ is a refinement of $\mathfrak{U}$ and $x$ is only contained in the set $U_0$.</p> <p>In particular, $x$ is not contained in any intersection of two or more distinct sets in $\mathfrak{U}'$, so the scyscraper sheaf has no sections over these. Hence all higher Čech-cocylces are $0$ and all higher cohomology groups vanish.</p> <p>(I believe this approach can also be found in Forster's book on Riemann Surfaces.)</p> Answer 12: <p>These <a href="https://www.ma.utexas.edu/users/iganev/skyscraper.pdf" rel="nofollow noreferrer">notes</a> probably have enough detail to give you what you want. I'll assume you mean $G$ to be an abelian group. Basically the argument goes like this: if $G$ is an abelian group, take an injective resolution $G \to I^\bullet$ of abelian groups. Then $\pi_*G \to \pi_*I^\bullet$ is an injective resolution, where $\pi:{\ast} \to X$ is the inclusion of a point. Note that $\pi_*G$ is your skyscraper sheaf. 
To compute $H^i(X,G)$ take global sections of $\pi_*G \to \pi_*I^\bullet$, which just gives back the resolution $G\to I^\bullet$, which shows that the higher cohomology of $\pi_*G$ vanishes.</p> Answer 13: <p>A skyscraper sheaf is flasque, hence has no cohomology: <a href="https://books.google.fr/books/about/Algebraic_Geometry.html?id=3rtX9t-nnvwC&amp;redir_esc=y" rel="noreferrer">Hartshorne</a> Chap. III, Prop.2.5, page 208.</p> Answer 14: <p>You can also see this using Čech cohomology:</p> <p>Consider an open cover $\mathfrak{U}=(U_i)_{i\in I}$ of your space $X$. You can always refine this cover so that only one of the sets $U_i$ contains the point $x$: Pick a set $U_0$ containing $x$ and consider the cover $\mathfrak{U}'$ consisting of $U_0$ and $U_i\setminus\{x\}$ for all $i\in I$. Then $\mathfrak{U}'$ is a refinement of $\mathfrak{U}$ and $x$ is only contained in the set $U_0$.</p> <p>In particular, $x$ is not contained in any intersection of two or more distinct sets in $\mathfrak{U}'$, so the scyscraper sheaf has no sections over these. Hence all higher Čech-cocylces are $0$ and all higher cohomology groups vanish.</p> <p>(I believe this approach can also be found in Forster's book on Riemann Surfaces.)</p> Answer 15: <p>These <a href="https://www.ma.utexas.edu/users/iganev/skyscraper.pdf" rel="nofollow noreferrer">notes</a> probably have enough detail to give you what you want. I'll assume you mean $G$ to be an abelian group. Basically the argument goes like this: if $G$ is an abelian group, take an injective resolution $G \to I^\bullet$ of abelian groups. Then $\pi_*G \to \pi_*I^\bullet$ is an injective resolution, where $\pi:{\ast} \to X$ is the inclusion of a point. Note that $\pi_*G$ is your skyscraper sheaf. To compute $H^i(X,G)$ take global sections of $\pi_*G \to \pi_*I^\bullet$, which just gives back the resolution $G\to I^\bullet$, which shows that the higher cohomology of $\pi_*G$ vanishes.</p> Answer 16: <p>A skyscraper sheaf is flasque, hence has no cohomology: <a href="https://books.google.fr/books/about/Algebraic_Geometry.html?id=3rtX9t-nnvwC&amp;redir_esc=y" rel="noreferrer">Hartshorne</a> Chap. III, Prop.2.5, page 208.</p> Answer 17: <p>You can also see this using Čech cohomology:</p> <p>Consider an open cover $\mathfrak{U}=(U_i)_{i\in I}$ of your space $X$. You can always refine this cover so that only one of the sets $U_i$ contains the point $x$: Pick a set $U_0$ containing $x$ and consider the cover $\mathfrak{U}'$ consisting of $U_0$ and $U_i\setminus\{x\}$ for all $i\in I$. Then $\mathfrak{U}'$ is a refinement of $\mathfrak{U}$ and $x$ is only contained in the set $U_0$.</p> <p>In particular, $x$ is not contained in any intersection of two or more distinct sets in $\mathfrak{U}'$, so the scyscraper sheaf has no sections over these. Hence all higher Čech-cocylces are $0$ and all higher cohomology groups vanish.</p> <p>(I believe this approach can also be found in Forster's book on Riemann Surfaces.)</p> Answer 18: <p>These <a href="https://www.ma.utexas.edu/users/iganev/skyscraper.pdf" rel="nofollow noreferrer">notes</a> probably have enough detail to give you what you want. I'll assume you mean $G$ to be an abelian group. Basically the argument goes like this: if $G$ is an abelian group, take an injective resolution $G \to I^\bullet$ of abelian groups. Then $\pi_*G \to \pi_*I^\bullet$ is an injective resolution, where $\pi:{\ast} \to X$ is the inclusion of a point. Note that $\pi_*G$ is your skyscraper sheaf. 
To compute $H^i(X,G)$ take global sections of $\pi_*G \to \pi_*I^\bullet$, which just gives back the resolution $G\to I^\bullet$, which shows that the higher cohomology of $\pi_*G$ vanishes.</p> Answer 19: <p>A skyscraper sheaf is flasque, hence has no cohomology: <a href="https://books.google.fr/books/about/Algebraic_Geometry.html?id=3rtX9t-nnvwC&amp;redir_esc=y" rel="noreferrer">Hartshorne</a> Chap. III, Prop.2.5, page 208.</p> Answer 20: <p>You can also see this using Čech cohomology:</p> <p>Consider an open cover $\mathfrak{U}=(U_i)_{i\in I}$ of your space $X$. You can always refine this cover so that only one of the sets $U_i$ contains the point $x$: Pick a set $U_0$ containing $x$ and consider the cover $\mathfrak{U}'$ consisting of $U_0$ and $U_i\setminus\{x\}$ for all $i\in I$. Then $\mathfrak{U}'$ is a refinement of $\mathfrak{U}$ and $x$ is only contained in the set $U_0$.</p> <p>In particular, $x$ is not contained in any intersection of two or more distinct sets in $\mathfrak{U}'$, so the scyscraper sheaf has no sections over these. Hence all higher Čech-cocylces are $0$ and all higher cohomology groups vanish.</p> <p>(I believe this approach can also be found in Forster's book on Riemann Surfaces.)</p> Answer 21: <p>These <a href="https://www.ma.utexas.edu/users/iganev/skyscraper.pdf" rel="nofollow noreferrer">notes</a> probably have enough detail to give you what you want. I'll assume you mean $G$ to be an abelian group. Basically the argument goes like this: if $G$ is an abelian group, take an injective resolution $G \to I^\bullet$ of abelian groups. Then $\pi_*G \to \pi_*I^\bullet$ is an injective resolution, where $\pi:{\ast} \to X$ is the inclusion of a point. Note that $\pi_*G$ is your skyscraper sheaf. To compute $H^i(X,G)$ take global sections of $\pi_*G \to \pi_*I^\bullet$, which just gives back the resolution $G\to I^\bullet$, which shows that the higher cohomology of $\pi_*G$ vanishes.</p> Answer 22: <p>A skyscraper sheaf is flasque, hence has no cohomology: <a href="https://books.google.fr/books/about/Algebraic_Geometry.html?id=3rtX9t-nnvwC&amp;redir_esc=y" rel="noreferrer">Hartshorne</a> Chap. III, Prop.2.5, page 208.</p> Answer 23: <p>You can also see this using Čech cohomology:</p> <p>Consider an open cover $\mathfrak{U}=(U_i)_{i\in I}$ of your space $X$. You can always refine this cover so that only one of the sets $U_i$ contains the point $x$: Pick a set $U_0$ containing $x$ and consider the cover $\mathfrak{U}'$ consisting of $U_0$ and $U_i\setminus\{x\}$ for all $i\in I$. Then $\mathfrak{U}'$ is a refinement of $\mathfrak{U}$ and $x$ is only contained in the set $U_0$.</p> <p>In particular, $x$ is not contained in any intersection of two or more distinct sets in $\mathfrak{U}'$, so the scyscraper sheaf has no sections over these. Hence all higher Čech-cocylces are $0$ and all higher cohomology groups vanish.</p> <p>(I believe this approach can also be found in Forster's book on Riemann Surfaces.)</p> Answer 24: <p>These <a href="https://www.ma.utexas.edu/users/iganev/skyscraper.pdf" rel="nofollow noreferrer">notes</a> probably have enough detail to give you what you want. I'll assume you mean $G$ to be an abelian group. Basically the argument goes like this: if $G$ is an abelian group, take an injective resolution $G \to I^\bullet$ of abelian groups. Then $\pi_*G \to \pi_*I^\bullet$ is an injective resolution, where $\pi:{\ast} \to X$ is the inclusion of a point. Note that $\pi_*G$ is your skyscraper sheaf. 
To compute $H^i(X,G)$ take global sections of $\pi_*G \to \pi_*I^\bullet$, which just gives back the resolution $G\to I^\bullet$, which shows that the higher cohomology of $\pi_*G$ vanishes.</p> Answer 25: <p>A skyscraper sheaf is flasque, hence has no cohomology: <a href="https://books.google.fr/books/about/Algebraic_Geometry.html?id=3rtX9t-nnvwC&amp;redir_esc=y" rel="noreferrer">Hartshorne</a> Chap. III, Prop.2.5, page 208.</p> Answer 26: <p>You can also see this using Čech cohomology:</p> <p>Consider an open cover $\mathfrak{U}=(U_i)_{i\in I}$ of your space $X$. You can always refine this cover so that only one of the sets $U_i$ contains the point $x$: Pick a set $U_0$ containing $x$ and consider the cover $\mathfrak{U}'$ consisting of $U_0$ and $U_i\setminus\{x\}$ for all $i\in I$. Then $\mathfrak{U}'$ is a refinement of $\mathfrak{U}$ and $x$ is only contained in the set $U_0$.</p> <p>In particular, $x$ is not contained in any intersection of two or more distinct sets in $\mathfrak{U}'$, so the scyscraper sheaf has no sections over these. Hence all higher Čech-cocylces are $0$ and all higher cohomology groups vanish.</p> <p>(I believe this approach can also be found in Forster's book on Riemann Surfaces.)</p> Answer 27: <p>These <a href="https://www.ma.utexas.edu/users/iganev/skyscraper.pdf" rel="nofollow noreferrer">notes</a> probably have enough detail to give you what you want. I'll assume you mean $G$ to be an abelian group. Basically the argument goes like this: if $G$ is an abelian group, take an injective resolution $G \to I^\bullet$ of abelian groups. Then $\pi_*G \to \pi_*I^\bullet$ is an injective resolution, where $\pi:{\ast} \to X$ is the inclusion of a point. Note that $\pi_*G$ is your skyscraper sheaf. To compute $H^i(X,G)$ take global sections of $\pi_*G \to \pi_*I^\bullet$, which just gives back the resolution $G\to I^\bullet$, which shows that the higher cohomology of $\pi_*G$ vanishes.</p> Answer 28: <p>A skyscraper sheaf is flasque, hence has no cohomology: <a href="https://books.google.fr/books/about/Algebraic_Geometry.html?id=3rtX9t-nnvwC&amp;redir_esc=y" rel="noreferrer">Hartshorne</a> Chap. III, Prop.2.5, page 208.</p> Answer 29: <p>You can also see this using Čech cohomology:</p> <p>Consider an open cover $\mathfrak{U}=(U_i)_{i\in I}$ of your space $X$. You can always refine this cover so that only one of the sets $U_i$ contains the point $x$: Pick a set $U_0$ containing $x$ and consider the cover $\mathfrak{U}'$ consisting of $U_0$ and $U_i\setminus\{x\}$ for all $i\in I$. Then $\mathfrak{U}'$ is a refinement of $\mathfrak{U}$ and $x$ is only contained in the set $U_0$.</p> <p>In particular, $x$ is not contained in any intersection of two or more distinct sets in $\mathfrak{U}'$, so the scyscraper sheaf has no sections over these. Hence all higher Čech-cocylces are $0$ and all higher cohomology groups vanish.</p> <p>(I believe this approach can also be found in Forster's book on Riemann Surfaces.)</p> Answer 30: <p>These <a href="https://www.ma.utexas.edu/users/iganev/skyscraper.pdf" rel="nofollow noreferrer">notes</a> probably have enough detail to give you what you want. I'll assume you mean $G$ to be an abelian group. Basically the argument goes like this: if $G$ is an abelian group, take an injective resolution $G \to I^\bullet$ of abelian groups. Then $\pi_*G \to \pi_*I^\bullet$ is an injective resolution, where $\pi:{\ast} \to X$ is the inclusion of a point. Note that $\pi_*G$ is your skyscraper sheaf. 
To compute $H^i(X,G)$ take global sections of $\pi_*G \to \pi_*I^\bullet$, which just gives back the resolution $G\to I^\bullet$, which shows that the higher cohomology of $\pi_*G$ vanishes.</p> Answer 31: <p>A skyscraper sheaf is flasque, hence has no cohomology: <a href="https://books.google.fr/books/about/Algebraic_Geometry.html?id=3rtX9t-nnvwC&amp;redir_esc=y" rel="noreferrer">Hartshorne</a> Chap. III, Prop.2.5, page 208.</p> Answer 32: <p>You can also see this using Čech cohomology:</p> <p>Consider an open cover $\mathfrak{U}=(U_i)_{i\in I}$ of your space $X$. You can always refine this cover so that only one of the sets $U_i$ contains the point $x$: Pick a set $U_0$ containing $x$ and consider the cover $\mathfrak{U}'$ consisting of $U_0$ and $U_i\setminus\{x\}$ for all $i\in I$. Then $\mathfrak{U}'$ is a refinement of $\mathfrak{U}$ and $x$ is only contained in the set $U_0$.</p> <p>In particular, $x$ is not contained in any intersection of two or more distinct sets in $\mathfrak{U}'$, so the scyscraper sheaf has no sections over these. Hence all higher Čech-cocylces are $0$ and all higher cohomology groups vanish.</p> <p>(I believe this approach can also be found in Forster's book on Riemann Surfaces.)</p> Answer 33: <p>These <a href="https://www.ma.utexas.edu/users/iganev/skyscraper.pdf" rel="nofollow noreferrer">notes</a> probably have enough detail to give you what you want. I'll assume you mean $G$ to be an abelian group. Basically the argument goes like this: if $G$ is an abelian group, take an injective resolution $G \to I^\bullet$ of abelian groups. Then $\pi_*G \to \pi_*I^\bullet$ is an injective resolution, where $\pi:{\ast} \to X$ is the inclusion of a point. Note that $\pi_*G$ is your skyscraper sheaf. To compute $H^i(X,G)$ take global sections of $\pi_*G \to \pi_*I^\bullet$, which just gives back the resolution $G\to I^\bullet$, which shows that the higher cohomology of $\pi_*G$ vanishes.</p> Answer 34: <p>A skyscraper sheaf is flasque, hence has no cohomology: <a href="https://books.google.fr/books/about/Algebraic_Geometry.html?id=3rtX9t-nnvwC&amp;redir_esc=y" rel="noreferrer">Hartshorne</a> Chap. III, Prop.2.5, page 208.</p> Answer 35: <p>You can also see this using Čech cohomology:</p> <p>Consider an open cover $\mathfrak{U}=(U_i)_{i\in I}$ of your space $X$. You can always refine this cover so that only one of the sets $U_i$ contains the point $x$: Pick a set $U_0$ containing $x$ and consider the cover $\mathfrak{U}'$ consisting of $U_0$ and $U_i\setminus\{x\}$ for all $i\in I$. Then $\mathfrak{U}'$ is a refinement of $\mathfrak{U}$ and $x$ is only contained in the set $U_0$.</p> <p>In particular, $x$ is not contained in any intersection of two or more distinct sets in $\mathfrak{U}'$, so the scyscraper sheaf has no sections over these. Hence all higher Čech-cocylces are $0$ and all higher cohomology groups vanish.</p> <p>(I believe this approach can also be found in Forster's book on Riemann Surfaces.)</p> Answer 36: <p>These <a href="https://www.ma.utexas.edu/users/iganev/skyscraper.pdf" rel="nofollow noreferrer">notes</a> probably have enough detail to give you what you want. I'll assume you mean $G$ to be an abelian group. Basically the argument goes like this: if $G$ is an abelian group, take an injective resolution $G \to I^\bullet$ of abelian groups. Then $\pi_*G \to \pi_*I^\bullet$ is an injective resolution, where $\pi:{\ast} \to X$ is the inclusion of a point. Note that $\pi_*G$ is your skyscraper sheaf. 
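A worked version of the Čech argument in Answer 32 may be useful; the following is an editor's sketch (the notation $i_{x*}G$ for the skyscraper sheaf at $x$ with stalk $G$ is ours, and $\mathfrak{U}'$ is the refined cover from that answer). Since only $U_0$ contains $x$, every intersection of two or more distinct members of $\mathfrak{U}'$ has no sections, so the Čech complex collapses in positive degrees:
\[
\check{C}^0(\mathfrak{U}', i_{x*}G) \cong G, \qquad \check{C}^p(\mathfrak{U}', i_{x*}G) = 0 \quad (p \ge 1),
\]
hence $\check{H}^0(\mathfrak{U}', i_{x*}G) \cong G$ and $\check{H}^p(\mathfrak{U}', i_{x*}G) = 0$ for $p \ge 1$. Since every cover admits such a refinement, passing to the limit over refinements gives $\check{H}^p(X, i_{x*}G) = 0$ for all $p \ge 1$.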
Yoneda lemma and (locally) small categories
2,092,884
7
https://math.stackexchange.com/questions/2092884/yoneda-lemma-and-locally-small-categories
<p>A category is locally small if for all objects $A, B$, $\textrm{Hom}(A,B)$ is a set. A category is small if its objects and morphisms are all elements of a single set.</p> <p>Wikipedia defines a functor category $\mathcal D^{\mathcal C}$ only for $\mathcal C$ a small category (<a href="https://en.wikipedia.org/wiki/Functor_category" rel="noreferrer">https://en.wikipedia.org/wiki/Functor_category</a>). Why is this? If $\mathcal C$ is instead locally small, does it somehow not make sense to talk about the category of functors from $\mathcal C$ to $\mathcal D$? </p> <p>Also, what does this mean for the Yoneda lemma? Do you need for $\mathcal C$ to be a small category in order for the statement "$\mathcal C$ is antiequivalent to the full subcategory of $\mathcal D^{\mathcal C}$ consisting of representable functors" to make sense?</p>
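For orientation, here is the one-line derivation behind the antiequivalence statement in the question (an editor's addition; it uses only the Yoneda lemma, applied to the functor $F = \mathrm{Hom}_{\mathcal C}(Y,-)$):
\[
\mathrm{Nat}\bigl(\mathrm{Hom}_{\mathcal C}(X,-),\,\mathrm{Hom}_{\mathcal C}(Y,-)\bigr) \;\cong\; \mathrm{Hom}_{\mathcal C}(Y,X),
\]
naturally in $X$ and $Y$. So $X \mapsto \mathrm{Hom}_{\mathcal C}(X,-)$ is full and faithful and reverses arrows, which is exactly the claimed antiequivalence of $\mathcal C$ with the full subcategory of representable functors, whenever the ambient functor category makes sense.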
[ { "answer_id": 2093693, "score": 7, "text": "<p>The reason <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> is usually only defined for small categories <span class=\"math-container\">$\\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href=\"https://arxiv.org/abs/0810.1279\" rel=\"noreferrer\">Set Theory for Category Theory</a>).</p>\n<p>Regarding the Yoneda lemma, what you wrote seems confused: <span class=\"math-container\">$\\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> (e.g. it's certainly not if <span class=\"math-container\">$\\mathcal D=\\mathbf 1\\neq\\mathcal C$</span> where <span class=\"math-container\">$\\mathbf 1$</span> is the category with one object). Furthermore, the statement that (for small <span class=\"math-container\">$\\mathcal C$</span>) <span class=\"math-container\">$\\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p>\n<p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class=\"math-container\">$\\mathcal F$</span> on a category <span class=\"math-container\">$\\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to <span class=\"math-container\">$F$</span> are in natural bijection with the class <span class=\"math-container\">$\\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class=\"math-container\">$\\mathcal C$</span>.</p>\n<p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> is representable functor to <span class=\"math-container\">$\\mathcal Set$</span>. Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to a set-valued functor <span class=\"math-container\">$F$</span> are in natural bijection with the <em>set</em> <span class=\"math-container\">$F(X)$</span>.</p>\n<p>The <strong>cocompletion</strong> of a category <span class=\"math-container\">$\\mathcal C$</span> is a functor <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> initial among functors <span class=\"math-container\">$\\mathcal C\\to\\mathcal D$</span> for <span class=\"math-container\">$\\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> of a locally small category, which always exists (and <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is actually locally small). 
Explicitly, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is constructed by identifying <span class=\"math-container\">$\\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(-,X)$</span>, identifying the objects of <span class=\"math-container\">$\\hat{\\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class=\"math-container\">$\\mathrm{Hom}_{\\hat{\\mathcal C}}(J,K)=\\mathrm{Nat}(\\mathrm{colim}_j\\mathrm{Hom}(-,X_j),\\mathrm{colim}_k\\mathrm{Hom}(-,X_k))=\\mathrm{lim}_j\\mathrm{colim}_k\\mathrm{Hom}_{\\mathcal C}(X_k,X_j)$</span>.</p>\n<p>In particular, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>, even if the latter is large or does not exist.\nIn fact, a category <span class=\"math-container\">$\\mathcal C$</span> is equivalent to a small category if and only if <span class=\"math-container\">$\\mathcal C$</span> and <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> are both locally small, in which case the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is all of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>; for a proof see <a href=\"http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf\" rel=\"noreferrer\">On the Size of Categories</a> by Freyd and Street.</p>\n" }, { "answer_id": 2092974, "score": 5, "text": "<p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\\mathbf{Set}\\to\\mathbf{Set}$ that maps all sets to $1$ except that $\\emptyset$ gets mapped to $\\emptyset$. Then $\\mathbf{Set}^\\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\\mathcal{D}^\\mathcal{C}$ exists for non-small $\\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href=\"https://ncatlab.org/nlab/show/large+category#properties\" rel=\"noreferrer\">foundational assumptions</a>. I'll assume local smallness for the remainder.</p>\n\n<p>As for the Yoneda lemma, it means that if $\\mathcal{C}$ isn't small then the category $\\mathbf{Set}^\\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-\"set\" of that category in this context. It may be a \"large functor\" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\\mathsf{Nat}(\\mathsf{hom}(-,C), F) \\cong FC$, it's just that $\\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). 
In general, $\\mathsf{Nat}(F,G)$ will be a \"proper class\" and so part of proving the Yoneda lemma would be proving that $\\mathsf{Nat}(\\mathsf{hom}(-,C), F)$ is, in fact, a set.</p>\n\n<p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a \"conservativity\" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p>\n\n<p>You may find <a href=\"http://www.brics.dk/RS/01/27/\" rel=\"noreferrer\">A Higher-Order Calculus for Categories</a> interesting.</p>\n" }, { "answer_id": 2093693, "score": 7, "text": "<p>The reason <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> is usually only defined for small categories <span class=\"math-container\">$\\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href=\"https://arxiv.org/abs/0810.1279\" rel=\"noreferrer\">Set Theory for Category Theory</a>).</p>\n<p>Regarding the Yoneda lemma, what you wrote seems confused: <span class=\"math-container\">$\\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> (e.g. it's certainly not if <span class=\"math-container\">$\\mathcal D=\\mathbf 1\\neq\\mathcal C$</span> where <span class=\"math-container\">$\\mathbf 1$</span> is the category with one object). Furthermore, the statement that (for small <span class=\"math-container\">$\\mathcal C$</span>) <span class=\"math-container\">$\\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p>\n<p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class=\"math-container\">$\\mathcal F$</span> on a category <span class=\"math-container\">$\\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to <span class=\"math-container\">$F$</span> are in natural bijection with the class <span class=\"math-container\">$\\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class=\"math-container\">$\\mathcal C$</span>.</p>\n<p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> is representable functor to <span class=\"math-container\">$\\mathcal Set$</span>. 
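A quick justification of the first claim in the answer above, added as an editor's sketch: for small $\mathcal{C}$, any class of natural transformations between set-valued functors embeds into a set-indexed product of hom-sets,
\[
\mathsf{Nat}(F,G) \;\subseteq\; \prod_{C \in \mathrm{ob}\,\mathcal{C}} \mathrm{Hom}_{\mathbf{Set}}(FC,\,GC),
\]
and the right-hand side is a set because $\mathrm{ob}\,\mathcal{C}$ is a set; hence $\mathbf{Set}^{\mathcal{C}}$ is locally small whenever $\mathcal{C}$ is small.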
{ "answer_id": 2093693, "score": 7, "text": "<p>The reason <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> is usually only defined for small categories <span class=\"math-container\">$\\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href=\"https://arxiv.org/abs/0810.1279\" rel=\"noreferrer\">Set Theory for Category Theory</a>).</p>\n<p>Regarding the Yoneda lemma, what you wrote seems confused: <span class=\"math-container\">$\\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> (e.g. it's certainly not if <span class=\"math-container\">$\\mathcal D=\\mathbf 1\\neq\\mathcal C$</span> where <span class=\"math-container\">$\\mathbf 1$</span> is the category with one object). Furthermore, the statement that (for small <span class=\"math-container\">$\\mathcal C$</span>) <span class=\"math-container\">$\\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p>\n<p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class=\"math-container\">$F$</span> on a category <span class=\"math-container\">$\\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to <span class=\"math-container\">$F$</span> are in natural bijection with the class <span class=\"math-container\">$F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class=\"math-container\">$\\mathcal C$</span>.</p>\n<p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> is a representable functor to <span class=\"math-container\">$\\mathcal Set$</span>. Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to a set-valued functor <span class=\"math-container\">$F$</span> are in natural bijection with the <em>set</em> <span class=\"math-container\">$F(X)$</span>.</p>\n<p>The <strong>cocompletion</strong> of a category <span class=\"math-container\">$\\mathcal C$</span> is a functor <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> initial among functors <span class=\"math-container\">$\\mathcal C\\to\\mathcal D$</span> for <span class=\"math-container\">$\\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> of a locally small category, which always exists (and <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is actually locally small). 
Explicitly, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is constructed by identifying <span class=\"math-container\">$\\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(-,X)$</span>, identifying the objects of <span class=\"math-container\">$\\hat{\\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class=\"math-container\">$\\mathrm{Hom}_{\\hat{\\mathcal C}}(J,K)=\\mathrm{Nat}(\\mathrm{colim}_j\\mathrm{Hom}(-,X_j),\\mathrm{colim}_k\\mathrm{Hom}(-,X_k))=\\mathrm{lim}_j\\mathrm{colim}_k\\mathrm{Hom}_{\\mathcal C}(X_k,X_j)$</span>.</p>\n<p>In particular, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>, even if the latter is large or does not exist.\nIn fact, a category <span class=\"math-container\">$\\mathcal C$</span> is equivalent to a small category if and only if <span class=\"math-container\">$\\mathcal C$</span> and <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> are both locally small, in which case the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is all of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>; for a proof see <a href=\"http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf\" rel=\"noreferrer\">On the Size of Categories</a> by Freyd and Street.</p>\n" }, { "answer_id": 2092974, "score": 5, "text": "<p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\\mathbf{Set}\\to\\mathbf{Set}$ that maps all sets to $1$ except that $\\emptyset$ gets mapped to $\\emptyset$. Then $\\mathbf{Set}^\\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\\mathcal{D}^\\mathcal{C}$ exists for non-small $\\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href=\"https://ncatlab.org/nlab/show/large+category#properties\" rel=\"noreferrer\">foundational assumptions</a>. I'll assume local smallness for the remainder.</p>\n\n<p>As for the Yoneda lemma, it means that if $\\mathcal{C}$ isn't small then the category $\\mathbf{Set}^\\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-\"set\" of that category in this context. It may be a \"large functor\" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\\mathsf{Nat}(\\mathsf{hom}(-,C), F) \\cong FC$, it's just that $\\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). 
In general, $\\mathsf{Nat}(F,G)$ will be a \"proper class\" and so part of proving the Yoneda lemma would be proving that $\\mathsf{Nat}(\\mathsf{hom}(-,C), F)$ is, in fact, a set.</p>\n\n<p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a \"conservativity\" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p>\n\n<p>You may find <a href=\"http://www.brics.dk/RS/01/27/\" rel=\"noreferrer\">A Higher-Order Calculus for Categories</a> interesting.</p>\n" }, { "answer_id": 2093693, "score": 7, "text": "<p>The reason <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> is usually only defined for small categories <span class=\"math-container\">$\\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href=\"https://arxiv.org/abs/0810.1279\" rel=\"noreferrer\">Set Theory for Category Theory</a>).</p>\n<p>Regarding the Yoneda lemma, what you wrote seems confused: <span class=\"math-container\">$\\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> (e.g. it's certainly not if <span class=\"math-container\">$\\mathcal D=\\mathbf 1\\neq\\mathcal C$</span> where <span class=\"math-container\">$\\mathbf 1$</span> is the category with one object). Furthermore, the statement that (for small <span class=\"math-container\">$\\mathcal C$</span>) <span class=\"math-container\">$\\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p>\n<p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class=\"math-container\">$\\mathcal F$</span> on a category <span class=\"math-container\">$\\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to <span class=\"math-container\">$F$</span> are in natural bijection with the class <span class=\"math-container\">$\\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class=\"math-container\">$\\mathcal C$</span>.</p>\n<p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> is representable functor to <span class=\"math-container\">$\\mathcal Set$</span>. 
Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to a set-valued functor <span class=\"math-container\">$F$</span> are in natural bijection with the <em>set</em> <span class=\"math-container\">$F(X)$</span>.</p>\n<p>The <strong>cocompletion</strong> of a category <span class=\"math-container\">$\\mathcal C$</span> is a functor <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> initial among functors <span class=\"math-container\">$\\mathcal C\\to\\mathcal D$</span> for <span class=\"math-container\">$\\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> of a locally small category, which always exists (and <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is actually locally small). Explicitly, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is constructed by identifying <span class=\"math-container\">$\\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(-,X)$</span>, identifying the objects of <span class=\"math-container\">$\\hat{\\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class=\"math-container\">$\\mathrm{Hom}_{\\hat{\\mathcal C}}(J,K)=\\mathrm{Nat}(\\mathrm{colim}_j\\mathrm{Hom}(-,X_j),\\mathrm{colim}_k\\mathrm{Hom}(-,X_k))=\\mathrm{lim}_j\\mathrm{colim}_k\\mathrm{Hom}_{\\mathcal C}(X_k,X_j)$</span>.</p>\n<p>In particular, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>, even if the latter is large or does not exist.\nIn fact, a category <span class=\"math-container\">$\\mathcal C$</span> is equivalent to a small category if and only if <span class=\"math-container\">$\\mathcal C$</span> and <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> are both locally small, in which case the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is all of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>; for a proof see <a href=\"http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf\" rel=\"noreferrer\">On the Size of Categories</a> by Freyd and Street.</p>\n" }, { "answer_id": 2092974, "score": 5, "text": "<p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\\mathbf{Set}\\to\\mathbf{Set}$ that maps all sets to $1$ except that $\\emptyset$ gets mapped to $\\emptyset$. Then $\\mathbf{Set}^\\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\\mathcal{D}^\\mathcal{C}$ exists for non-small $\\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href=\"https://ncatlab.org/nlab/show/large+category#properties\" rel=\"noreferrer\">foundational assumptions</a>. 
I'll assume local smallness for the remainder.</p>\n\n<p>As for the Yoneda lemma, it means that if $\\mathcal{C}$ isn't small then the category $\\mathbf{Set}^\\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-\"set\" of that category in this context. It may be a \"large functor\" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\\mathsf{Nat}(\\mathsf{hom}(-,C), F) \\cong FC$, it's just that $\\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). In general, $\\mathsf{Nat}(F,G)$ will be a \"proper class\" and so part of proving the Yoneda lemma would be proving that $\\mathsf{Nat}(\\mathsf{hom}(-,C), F)$ is, in fact, a set.</p>\n\n<p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a \"conservativity\" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p>\n\n<p>You may find <a href=\"http://www.brics.dk/RS/01/27/\" rel=\"noreferrer\">A Higher-Order Calculus for Categories</a> interesting.</p>\n" }, { "answer_id": 2093693, "score": 7, "text": "<p>The reason <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> is usually only defined for small categories <span class=\"math-container\">$\\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href=\"https://arxiv.org/abs/0810.1279\" rel=\"noreferrer\">Set Theory for Category Theory</a>).</p>\n<p>Regarding the Yoneda lemma, what you wrote seems confused: <span class=\"math-container\">$\\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> (e.g. it's certainly not if <span class=\"math-container\">$\\mathcal D=\\mathbf 1\\neq\\mathcal C$</span> where <span class=\"math-container\">$\\mathbf 1$</span> is the category with one object). 
Furthermore, the statement that (for small <span class=\"math-container\">$\\mathcal C$</span>) <span class=\"math-container\">$\\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p>\n<p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class=\"math-container\">$\\mathcal F$</span> on a category <span class=\"math-container\">$\\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to <span class=\"math-container\">$F$</span> are in natural bijection with the class <span class=\"math-container\">$\\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class=\"math-container\">$\\mathcal C$</span>.</p>\n<p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> is representable functor to <span class=\"math-container\">$\\mathcal Set$</span>. Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to a set-valued functor <span class=\"math-container\">$F$</span> are in natural bijection with the <em>set</em> <span class=\"math-container\">$F(X)$</span>.</p>\n<p>The <strong>cocompletion</strong> of a category <span class=\"math-container\">$\\mathcal C$</span> is a functor <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> initial among functors <span class=\"math-container\">$\\mathcal C\\to\\mathcal D$</span> for <span class=\"math-container\">$\\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> of a locally small category, which always exists (and <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is actually locally small). 
Explicitly, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is constructed by identifying <span class=\"math-container\">$\\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(-,X)$</span>, identifying the objects of <span class=\"math-container\">$\\hat{\\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class=\"math-container\">$\\mathrm{Hom}_{\\hat{\\mathcal C}}(J,K)=\\mathrm{Nat}(\\mathrm{colim}_j\\mathrm{Hom}(-,X_j),\\mathrm{colim}_k\\mathrm{Hom}(-,X_k))=\\mathrm{lim}_j\\mathrm{colim}_k\\mathrm{Hom}_{\\mathcal C}(X_k,X_j)$</span>.</p>\n<p>In particular, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>, even if the latter is large or does not exist.\nIn fact, a category <span class=\"math-container\">$\\mathcal C$</span> is equivalent to a small category if and only if <span class=\"math-container\">$\\mathcal C$</span> and <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> are both locally small, in which case the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is all of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>; for a proof see <a href=\"http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf\" rel=\"noreferrer\">On the Size of Categories</a> by Freyd and Street.</p>\n" }, { "answer_id": 2092974, "score": 5, "text": "<p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\\mathbf{Set}\\to\\mathbf{Set}$ that maps all sets to $1$ except that $\\emptyset$ gets mapped to $\\emptyset$. Then $\\mathbf{Set}^\\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\\mathcal{D}^\\mathcal{C}$ exists for non-small $\\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href=\"https://ncatlab.org/nlab/show/large+category#properties\" rel=\"noreferrer\">foundational assumptions</a>. I'll assume local smallness for the remainder.</p>\n\n<p>As for the Yoneda lemma, it means that if $\\mathcal{C}$ isn't small then the category $\\mathbf{Set}^\\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-\"set\" of that category in this context. It may be a \"large functor\" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\\mathsf{Nat}(\\mathsf{hom}(-,C), F) \\cong FC$, it's just that $\\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). 
In general, $\\mathsf{Nat}(F,G)$ will be a \"proper class\" and so part of proving the Yoneda lemma would be proving that $\\mathsf{Nat}(\\mathsf{hom}(-,C), F)$ is, in fact, a set.</p>\n\n<p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a \"conservativity\" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p>\n\n<p>You may find <a href=\"http://www.brics.dk/RS/01/27/\" rel=\"noreferrer\">A Higher-Order Calculus for Categories</a> interesting.</p>\n" }, { "answer_id": 2093693, "score": 7, "text": "<p>The reason <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> is usually only defined for small categories <span class=\"math-container\">$\\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href=\"https://arxiv.org/abs/0810.1279\" rel=\"noreferrer\">Set Theory for Category Theory</a>).</p>\n<p>Regarding the Yoneda lemma, what you wrote seems confused: <span class=\"math-container\">$\\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> (e.g. it's certainly not if <span class=\"math-container\">$\\mathcal D=\\mathbf 1\\neq\\mathcal C$</span> where <span class=\"math-container\">$\\mathbf 1$</span> is the category with one object). Furthermore, the statement that (for small <span class=\"math-container\">$\\mathcal C$</span>) <span class=\"math-container\">$\\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p>\n<p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class=\"math-container\">$\\mathcal F$</span> on a category <span class=\"math-container\">$\\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to <span class=\"math-container\">$F$</span> are in natural bijection with the class <span class=\"math-container\">$\\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class=\"math-container\">$\\mathcal C$</span>.</p>\n<p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> is representable functor to <span class=\"math-container\">$\\mathcal Set$</span>. 
Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to a set-valued functor <span class=\"math-container\">$F$</span> are in natural bijection with the <em>set</em> <span class=\"math-container\">$F(X)$</span>.</p>\n<p>The <strong>cocompletion</strong> of a category <span class=\"math-container\">$\\mathcal C$</span> is a functor <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> initial among functors <span class=\"math-container\">$\\mathcal C\\to\\mathcal D$</span> for <span class=\"math-container\">$\\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> of a locally small category, which always exists (and <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is actually locally small). Explicitly, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is constructed by identifying <span class=\"math-container\">$\\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(-,X)$</span>, identifying the objects of <span class=\"math-container\">$\\hat{\\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class=\"math-container\">$\\mathrm{Hom}_{\\hat{\\mathcal C}}(J,K)=\\mathrm{Nat}(\\mathrm{colim}_j\\mathrm{Hom}(-,X_j),\\mathrm{colim}_k\\mathrm{Hom}(-,X_k))=\\mathrm{lim}_j\\mathrm{colim}_k\\mathrm{Hom}_{\\mathcal C}(X_k,X_j)$</span>.</p>\n<p>In particular, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>, even if the latter is large or does not exist.\nIn fact, a category <span class=\"math-container\">$\\mathcal C$</span> is equivalent to a small category if and only if <span class=\"math-container\">$\\mathcal C$</span> and <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> are both locally small, in which case the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is all of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>; for a proof see <a href=\"http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf\" rel=\"noreferrer\">On the Size of Categories</a> by Freyd and Street.</p>\n" }, { "answer_id": 2092974, "score": 5, "text": "<p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\\mathbf{Set}\\to\\mathbf{Set}$ that maps all sets to $1$ except that $\\emptyset$ gets mapped to $\\emptyset$. Then $\\mathbf{Set}^\\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\\mathcal{D}^\\mathcal{C}$ exists for non-small $\\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href=\"https://ncatlab.org/nlab/show/large+category#properties\" rel=\"noreferrer\">foundational assumptions</a>. 
I'll assume local smallness for the remainder.</p>\n\n<p>As for the Yoneda lemma, it means that if $\\mathcal{C}$ isn't small then the category $\\mathbf{Set}^\\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-\"set\" of that category in this context. It may be a \"large functor\" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\\mathsf{Nat}(\\mathsf{hom}(-,C), F) \\cong FC$, it's just that $\\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). In general, $\\mathsf{Nat}(F,G)$ will be a \"proper class\" and so part of proving the Yoneda lemma would be proving that $\\mathsf{Nat}(\\mathsf{hom}(-,C), F)$ is, in fact, a set.</p>\n\n<p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a \"conservativity\" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p>\n\n<p>You may find <a href=\"http://www.brics.dk/RS/01/27/\" rel=\"noreferrer\">A Higher-Order Calculus for Categories</a> interesting.</p>\n" }, { "answer_id": 2093693, "score": 7, "text": "<p>The reason <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> is usually only defined for small categories <span class=\"math-container\">$\\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href=\"https://arxiv.org/abs/0810.1279\" rel=\"noreferrer\">Set Theory for Category Theory</a>).</p>\n<p>Regarding the Yoneda lemma, what you wrote seems confused: <span class=\"math-container\">$\\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> (e.g. it's certainly not if <span class=\"math-container\">$\\mathcal D=\\mathbf 1\\neq\\mathcal C$</span> where <span class=\"math-container\">$\\mathbf 1$</span> is the category with one object). 
Furthermore, the statement that (for small <span class=\"math-container\">$\\mathcal C$</span>) <span class=\"math-container\">$\\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p>\n<p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class=\"math-container\">$\\mathcal F$</span> on a category <span class=\"math-container\">$\\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to <span class=\"math-container\">$F$</span> are in natural bijection with the class <span class=\"math-container\">$\\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class=\"math-container\">$\\mathcal C$</span>.</p>\n<p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> is representable functor to <span class=\"math-container\">$\\mathcal Set$</span>. Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to a set-valued functor <span class=\"math-container\">$F$</span> are in natural bijection with the <em>set</em> <span class=\"math-container\">$F(X)$</span>.</p>\n<p>The <strong>cocompletion</strong> of a category <span class=\"math-container\">$\\mathcal C$</span> is a functor <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> initial among functors <span class=\"math-container\">$\\mathcal C\\to\\mathcal D$</span> for <span class=\"math-container\">$\\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> of a locally small category, which always exists (and <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is actually locally small). 
Explicitly, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is constructed by identifying <span class=\"math-container\">$\\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(-,X)$</span>, identifying the objects of <span class=\"math-container\">$\\hat{\\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class=\"math-container\">$\\mathrm{Hom}_{\\hat{\\mathcal C}}(J,K)=\\mathrm{Nat}(\\mathrm{colim}_j\\mathrm{Hom}(-,X_j),\\mathrm{colim}_k\\mathrm{Hom}(-,X_k))=\\mathrm{lim}_j\\mathrm{colim}_k\\mathrm{Hom}_{\\mathcal C}(X_k,X_j)$</span>.</p>\n<p>In particular, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>, even if the latter is large or does not exist.\nIn fact, a category <span class=\"math-container\">$\\mathcal C$</span> is equivalent to a small category if and only if <span class=\"math-container\">$\\mathcal C$</span> and <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> are both locally small, in which case the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is all of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>; for a proof see <a href=\"http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf\" rel=\"noreferrer\">On the Size of Categories</a> by Freyd and Street.</p>\n" }, { "answer_id": 2092974, "score": 5, "text": "<p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\\mathbf{Set}\\to\\mathbf{Set}$ that maps all sets to $1$ except that $\\emptyset$ gets mapped to $\\emptyset$. Then $\\mathbf{Set}^\\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\\mathcal{D}^\\mathcal{C}$ exists for non-small $\\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href=\"https://ncatlab.org/nlab/show/large+category#properties\" rel=\"noreferrer\">foundational assumptions</a>. I'll assume local smallness for the remainder.</p>\n\n<p>As for the Yoneda lemma, it means that if $\\mathcal{C}$ isn't small then the category $\\mathbf{Set}^\\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-\"set\" of that category in this context. It may be a \"large functor\" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\\mathsf{Nat}(\\mathsf{hom}(-,C), F) \\cong FC$, it's just that $\\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). 
In general, $\\mathsf{Nat}(F,G)$ will be a \"proper class\" and so part of proving the Yoneda lemma would be proving that $\\mathsf{Nat}(\\mathsf{hom}(-,C), F)$ is, in fact, a set.</p>\n\n<p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a \"conservativity\" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p>\n\n<p>You may find <a href=\"http://www.brics.dk/RS/01/27/\" rel=\"noreferrer\">A Higher-Order Calculus for Categories</a> interesting.</p>\n" }, { "answer_id": 2093693, "score": 7, "text": "<p>The reason <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> is usually only defined for small categories <span class=\"math-container\">$\\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href=\"https://arxiv.org/abs/0810.1279\" rel=\"noreferrer\">Set Theory for Category Theory</a>).</p>\n<p>Regarding the Yoneda lemma, what you wrote seems confused: <span class=\"math-container\">$\\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class=\"math-container\">$\\mathcal D^{\\mathcal C}$</span> (e.g. it's certainly not if <span class=\"math-container\">$\\mathcal D=\\mathbf 1\\neq\\mathcal C$</span> where <span class=\"math-container\">$\\mathbf 1$</span> is the category with one object). Furthermore, the statement that (for small <span class=\"math-container\">$\\mathcal C$</span>) <span class=\"math-container\">$\\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p>\n<p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class=\"math-container\">$\\mathcal F$</span> on a category <span class=\"math-container\">$\\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to <span class=\"math-container\">$F$</span> are in natural bijection with the class <span class=\"math-container\">$\\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class=\"math-container\">$\\mathcal C$</span>.</p>\n<p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> is representable functor to <span class=\"math-container\">$\\mathcal Set$</span>. 
Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(X,-)$</span> to a set-valued functor <span class=\"math-container\">$F$</span> are in natural bijection with the <em>set</em> <span class=\"math-container\">$F(X)$</span>.</p>\n<p>The <strong>cocompletion</strong> of a category <span class=\"math-container\">$\\mathcal C$</span> is a functor <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> initial among functors <span class=\"math-container\">$\\mathcal C\\to\\mathcal D$</span> for <span class=\"math-container\">$\\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class=\"math-container\">$\\mathcal C\\to\\hat{\\mathcal C}$</span> of a locally small category, which always exists (and <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is actually locally small). Explicitly, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is constructed by identifying <span class=\"math-container\">$\\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class=\"math-container\">$\\mathrm{Hom}_{\\mathcal C}(-,X)$</span>, identifying the objects of <span class=\"math-container\">$\\hat{\\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class=\"math-container\">$\\mathrm{Hom}_{\\hat{\\mathcal C}}(J,K)=\\mathrm{Nat}(\\mathrm{colim}_j\\mathrm{Hom}(-,X_j),\\mathrm{colim}_k\\mathrm{Hom}(-,X_k))=\\mathrm{lim}_j\\mathrm{colim}_k\\mathrm{Hom}_{\\mathcal C}(X_k,X_j)$</span>.</p>\n<p>In particular, the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>, even if the latter is large or does not exist.\nIn fact, a category <span class=\"math-container\">$\\mathcal C$</span> is equivalent to a small category if and only if <span class=\"math-container\">$\\mathcal C$</span> and <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span> are both locally small, in which case the cocompletion <span class=\"math-container\">$\\hat{\\mathcal C}$</span> is all of <span class=\"math-container\">$\\mathcal Set^{\\mathcal C}$</span>; for a proof see <a href=\"http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf\" rel=\"noreferrer\">On the Size of Categories</a> by Freyd and Street.</p>\n" }, { "answer_id": 2092974, "score": 5, "text": "<p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\\mathbf{Set}\\to\\mathbf{Set}$ that maps all sets to $1$ except that $\\emptyset$ gets mapped to $\\emptyset$. Then $\\mathbf{Set}^\\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\\mathcal{D}^\\mathcal{C}$ exists for non-small $\\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href=\"https://ncatlab.org/nlab/show/large+category#properties\" rel=\"noreferrer\">foundational assumptions</a>. 
I'll assume local smallness for the remainder.</p>\n\n<p>As for the Yoneda lemma, it means that if $\\mathcal{C}$ isn't small then the category $\\mathbf{Set}^\\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-\"set\" of that category in this context. It may be a \"large functor\" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\\mathsf{Nat}(\\mathsf{hom}(-,C), F) \\cong FC$, it's just that $\\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). In general, $\\mathsf{Nat}(F,G)$ will be a \"proper class\" and so part of proving the Yoneda lemma would be proving that $\\mathsf{Nat}(\\mathsf{hom}(-,C), F)$ is, in fact, a set.</p>\n\n<p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a \"conservativity\" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p>\n\n<p>You may find <a href=\"http://www.brics.dk/RS/01/27/\" rel=\"noreferrer\">A Higher-Order Calculus for Categories</a> interesting.</p>\n" } ]
Question: Yoneda lemma and (locally) small categories
Answer 1: <p>The reason <span class="math-container">$\mathcal D^{\mathcal C}$</span> is usually only defined for small categories <span class="math-container">$\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href="https://arxiv.org/abs/0810.1279" rel="noreferrer">Set Theory for Category Theory</a>).</p> <p>Regarding the Yoneda lemma, what you wrote seems confused: <span class="math-container">$\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class="math-container">$\mathcal D^{\mathcal C}$</span> (e.g. it's certainly not if <span class="math-container">$\mathcal D=\mathbf 1\neq\mathcal C$</span> where <span class="math-container">$\mathbf 1$</span> is the category with one object and a single identity arrow). Furthermore, the statement that (for small <span class="math-container">$\mathcal C$</span>) <span class="math-container">$\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class="math-container">$\mathcal Set^{\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p> <p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class="math-container">$F$</span> on a category <span class="math-container">$\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to <span class="math-container">$F$</span> are in natural bijection with the class <span class="math-container">$F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class="math-container">$\mathcal C$</span>.</p> <p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> is in fact a functor to <span class="math-container">$\mathcal Set$</span>, i.e., is set-valued. Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to a set-valued functor <span class="math-container">$F$</span> are in natural bijection with the <em>set</em> <span class="math-container">$F(X)$</span>.</p> <p>The <strong>cocompletion</strong> of a category <span class="math-container">$\mathcal C$</span> is a functor <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> initial among functors <span class="math-container">$\mathcal C\to\mathcal D$</span> for <span class="math-container">$\mathcal D$</span> a category with all small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> of a locally small category, which always exists (and <span class="math-container">$\hat{\mathcal C}$</span> is actually locally small).</p>
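<p>Unwinding this initiality, as a standard reformulation: for any functor $F\colon\mathcal C\to\mathcal D$ with $\mathcal D$ having all small colimits, there is a small-colimit-preserving functor $\hat F\colon\hat{\mathcal C}\to\mathcal D$, unique up to natural isomorphism, with $\hat F\circ y\cong F$, where $y\colon\mathcal C\to\hat{\mathcal C}$ denotes the embedding; this is the sense in which $\hat{\mathcal C}$ is the <em>free</em> cocompletion.</p>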
<p>Explicitly, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is constructed by identifying <span class="math-container">$\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class="math-container">$\mathrm{Hom}_{\mathcal C}(-,X)$</span>, identifying the objects of <span class="math-container">$\hat{\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class="math-container">$\mathrm{Hom}_{\hat{\mathcal C}}(J,K)=\mathrm{Nat}(\mathrm{colim}_j\mathrm{Hom}(-,X_j),\mathrm{colim}_k\mathrm{Hom}(-,X_k))=\mathrm{lim}_j\mathrm{colim}_k\mathrm{Hom}_{\mathcal C}(X_j,X_k)$</span>.</p> <p>In particular, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class="math-container">$\mathcal Set^{\mathcal C^{op}}$</span>, even if the latter is large or does not exist. In fact, a category <span class="math-container">$\mathcal C$</span> is equivalent to a small category if and only if <span class="math-container">$\mathcal C$</span> and <span class="math-container">$\mathcal Set^{\mathcal C}$</span> are both locally small, in which case the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is all of <span class="math-container">$\mathcal Set^{\mathcal C^{op}}$</span>; for a proof see <a href="http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf" rel="noreferrer">On the Size of Categories</a> by Freyd and Street.</p>
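<p>As a sanity check of the hom-formula above, take $J$ and $K$ to be the one-object diagrams $\mathrm{Hom}(-,X)$ and $\mathrm{Hom}(-,Y)$: the limit and colimit are then over singletons, and the formula reduces to $\mathrm{Hom}_{\hat{\mathcal C}}(\mathrm{Hom}(-,X),\mathrm{Hom}(-,Y))\cong\mathrm{Hom}_{\mathcal C}(X,Y)$, recovering the statement that the embedding $\mathcal C\to\hat{\mathcal C}$ is full and faithful.</p>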
Answer 2: <p>It's provable that the category of functors from a small category (to a locally small one) is locally small, while this can fail when the domain category is merely locally small. For example, $\mathbf{Set}$ itself is locally small but not equivalent to a small category, and it follows from the result of Freyd and Street cited in the other answer that $\mathbf{Set}^\mathbf{Set}$ is not locally small: some hom-class $\mathbf{Set}^\mathbf{Set}(F,G)$ is a proper class. Whether the category $\mathcal{D}^\mathcal{C}$ exists for non-small $\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href="https://ncatlab.org/nlab/show/large+category#properties" rel="noreferrer">foundational assumptions</a>. I'll assume local smallness for the remainder.</p> <p>As for the Yoneda lemma, this means that if $\mathcal{C}$ isn't small then the category $\mathbf{Set}^\mathcal{C^{op}}$ need not exist; therefore, the Yoneda embedding $\mathcal{C}\to\mathbf{Set}^\mathcal{C^{op}}$ can't be presented as an arrow with that category as codomain, i.e., as an element of a hom-"set" of a category of categories, in this context. It may be a "large functor" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\mathsf{Nat}(\mathsf{hom}(-,C), F) \cong FC$, it's just that $\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). In general, $\mathsf{Nat}(F,G)$ will be a "proper class" and so part of proving the Yoneda lemma would be proving that $\mathsf{Nat}(\mathsf{hom}(-,C), F)$ is, in fact, a set.</p> <p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has proved a "conservativity" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p> <p>You may find <a href="http://www.brics.dk/RS/01/27/" rel="noreferrer">A Higher-Order Calculus for Categories</a> interesting.</p>
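<p>A minimal executable sketch of this bijection, in its covariant form $\mathsf{Nat}(\mathsf{hom}(C,-),F)\cong FC$, can be given in Haskell, where naturality is modeled by rank-2 polymorphism; the names <code>Nat</code>, <code>toYo</code> and <code>fromYo</code> are ad hoc choices for this illustration, not a standard library API:</p>
<pre><code>{-# LANGUAGE RankNTypes #-}

-- A natural transformation from hom(c,-) to f, encoded as a rank-2
-- polymorphic function; parametricity makes every such function natural.
type Nat c f = forall b. (c -> b) -> f b

-- One direction of the Yoneda bijection: evaluate at the identity arrow.
toYo :: Nat c f -> f c
toYo alpha = alpha id

-- The other direction: push a given element of (f c) forward along h.
fromYo :: Functor f => f c -> Nat c f
fromYo fc = \h -> fmap h fc

-- Example: with f = [] and c = Int,
--   toYo (fromYo [1,2,3]) == [1,2,3]
</code></pre>
<p>Here <code>toYo (fromYo fc) = fmap id fc = fc</code> holds by the functor laws, while <code>fromYo (toYo alpha) = alpha</code> is exactly the step that uses naturality (supplied for free by parametricity), mirroring the proof of the lemma.</p>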
The <strong>Yoneda embedding</strong> is the cocompletion <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> of a locally small category, which always exists (and <span class="math-container">$\hat{\mathcal C}$</span> is actually locally small). Explicitly, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is constructed by identifying <span class="math-container">$\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class="math-container">$\mathrm{Hom}_{\mathcal C}(-,X)$</span>, identifying the objects of <span class="math-container">$\hat{\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class="math-container">$\mathrm{Hom}_{\hat{\mathcal C}}(J,K)=\mathrm{Nat}(\mathrm{colim}_j\mathrm{Hom}(-,X_j),\mathrm{colim}_k\mathrm{Hom}(-,X_k))=\mathrm{lim}_j\mathrm{colim}_k\mathrm{Hom}_{\mathcal C}(X_k,X_j)$</span>.</p> <p>In particular, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>, even if the latter is large or does not exist. In fact, a category <span class="math-container">$\mathcal C$</span> is equivalent to a small category if and only if <span class="math-container">$\mathcal C$</span> and <span class="math-container">$\mathcal Set^{\mathcal C}$</span> are both locally small, in which case the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is all of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>; for a proof see <a href="http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf" rel="noreferrer">On the Size of Categories</a> by Freyd and Street.</p> Answer 4: <p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\mathbf{Set}\to\mathbf{Set}$ that maps all sets to $1$ except that $\emptyset$ gets mapped to $\emptyset$. Then $\mathbf{Set}^\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\mathcal{D}^\mathcal{C}$ exists for non-small $\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href="https://ncatlab.org/nlab/show/large+category#properties" rel="noreferrer">foundational assumptions</a>. I'll assume local smallness for the remainder.</p> <p>As for the Yoneda lemma, it means that if $\mathcal{C}$ isn't small then the category $\mathbf{Set}^\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-"set" of that category in this context. It may be a "large functor" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\mathsf{Nat}(\mathsf{hom}(-,C), F) \cong FC$, it's just that $\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). 
In general, $\mathsf{Nat}(F,G)$ will be a "proper class" and so part of proving the Yoneda lemma would be proving that $\mathsf{Nat}(\mathsf{hom}(-,C), F)$ is, in fact, a set.</p> <p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a "conservativity" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p> <p>You may find <a href="http://www.brics.dk/RS/01/27/" rel="noreferrer">A Higher-Order Calculus for Categories</a> interesting.</p> Answer 5: <p>The reason <span class="math-container">$\mathcal D^{\mathcal C}$</span> is usually only defined for small categories <span class="math-container">$\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href="https://arxiv.org/abs/0810.1279" rel="noreferrer">Set Theory for Category Theory</a>).</p> <p>Regarding the Yoneda lemma, what you wrote seems confused: <span class="math-container">$\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class="math-container">$\mathcal D^{\mathcal C}$</span> (e.g. it's certainly not if <span class="math-container">$\mathcal D=\mathbf 1\neq\mathcal C$</span> where <span class="math-container">$\mathbf 1$</span> is the category with one object). Furthermore, the statement that (for small <span class="math-container">$\mathcal C$</span>) <span class="math-container">$\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class="math-container">$\mathcal Set^{\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p> <p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class="math-container">$\mathcal F$</span> on a category <span class="math-container">$\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to <span class="math-container">$F$</span> are in natural bijection with the class <span class="math-container">$\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class="math-container">$\mathcal C$</span>.</p> <p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> is representable functor to <span class="math-container">$\mathcal Set$</span>. 
Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to a set-valued functor <span class="math-container">$F$</span> are in natural bijection with the <em>set</em> <span class="math-container">$F(X)$</span>.</p> <p>The <strong>cocompletion</strong> of a category <span class="math-container">$\mathcal C$</span> is a functor <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> initial among functors <span class="math-container">$\mathcal C\to\mathcal D$</span> for <span class="math-container">$\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> of a locally small category, which always exists (and <span class="math-container">$\hat{\mathcal C}$</span> is actually locally small). Explicitly, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is constructed by identifying <span class="math-container">$\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class="math-container">$\mathrm{Hom}_{\mathcal C}(-,X)$</span>, identifying the objects of <span class="math-container">$\hat{\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class="math-container">$\mathrm{Hom}_{\hat{\mathcal C}}(J,K)=\mathrm{Nat}(\mathrm{colim}_j\mathrm{Hom}(-,X_j),\mathrm{colim}_k\mathrm{Hom}(-,X_k))=\mathrm{lim}_j\mathrm{colim}_k\mathrm{Hom}_{\mathcal C}(X_k,X_j)$</span>.</p> <p>In particular, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>, even if the latter is large or does not exist. In fact, a category <span class="math-container">$\mathcal C$</span> is equivalent to a small category if and only if <span class="math-container">$\mathcal C$</span> and <span class="math-container">$\mathcal Set^{\mathcal C}$</span> are both locally small, in which case the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is all of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>; for a proof see <a href="http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf" rel="noreferrer">On the Size of Categories</a> by Freyd and Street.</p> Answer 6: <p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\mathbf{Set}\to\mathbf{Set}$ that maps all sets to $1$ except that $\emptyset$ gets mapped to $\emptyset$. Then $\mathbf{Set}^\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\mathcal{D}^\mathcal{C}$ exists for non-small $\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href="https://ncatlab.org/nlab/show/large+category#properties" rel="noreferrer">foundational assumptions</a>. 
I'll assume local smallness for the remainder.</p> <p>As for the Yoneda lemma, it means that if $\mathcal{C}$ isn't small then the category $\mathbf{Set}^\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-"set" of that category in this context. It may be a "large functor" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\mathsf{Nat}(\mathsf{hom}(-,C), F) \cong FC$, it's just that $\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). In general, $\mathsf{Nat}(F,G)$ will be a "proper class" and so part of proving the Yoneda lemma would be proving that $\mathsf{Nat}(\mathsf{hom}(-,C), F)$ is, in fact, a set.</p> <p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a "conservativity" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p> <p>You may find <a href="http://www.brics.dk/RS/01/27/" rel="noreferrer">A Higher-Order Calculus for Categories</a> interesting.</p> Answer 7: <p>The reason <span class="math-container">$\mathcal D^{\mathcal C}$</span> is usually only defined for small categories <span class="math-container">$\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href="https://arxiv.org/abs/0810.1279" rel="noreferrer">Set Theory for Category Theory</a>).</p> <p>Regarding the Yoneda lemma, what you wrote seems confused: <span class="math-container">$\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class="math-container">$\mathcal D^{\mathcal C}$</span> (e.g. it's certainly not if <span class="math-container">$\mathcal D=\mathbf 1\neq\mathcal C$</span> where <span class="math-container">$\mathbf 1$</span> is the category with one object). Furthermore, the statement that (for small <span class="math-container">$\mathcal C$</span>) <span class="math-container">$\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class="math-container">$\mathcal Set^{\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p> <p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class="math-container">$\mathcal F$</span> on a category <span class="math-container">$\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to <span class="math-container">$F$</span> are in natural bijection with the class <span class="math-container">$\mathcal F(X)$</span>. 
One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class="math-container">$\mathcal C$</span>.</p> <p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> is representable functor to <span class="math-container">$\mathcal Set$</span>. Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to a set-valued functor <span class="math-container">$F$</span> are in natural bijection with the <em>set</em> <span class="math-container">$F(X)$</span>.</p> <p>The <strong>cocompletion</strong> of a category <span class="math-container">$\mathcal C$</span> is a functor <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> initial among functors <span class="math-container">$\mathcal C\to\mathcal D$</span> for <span class="math-container">$\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> of a locally small category, which always exists (and <span class="math-container">$\hat{\mathcal C}$</span> is actually locally small). Explicitly, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is constructed by identifying <span class="math-container">$\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class="math-container">$\mathrm{Hom}_{\mathcal C}(-,X)$</span>, identifying the objects of <span class="math-container">$\hat{\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class="math-container">$\mathrm{Hom}_{\hat{\mathcal C}}(J,K)=\mathrm{Nat}(\mathrm{colim}_j\mathrm{Hom}(-,X_j),\mathrm{colim}_k\mathrm{Hom}(-,X_k))=\mathrm{lim}_j\mathrm{colim}_k\mathrm{Hom}_{\mathcal C}(X_k,X_j)$</span>.</p> <p>In particular, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>, even if the latter is large or does not exist. In fact, a category <span class="math-container">$\mathcal C$</span> is equivalent to a small category if and only if <span class="math-container">$\mathcal C$</span> and <span class="math-container">$\mathcal Set^{\mathcal C}$</span> are both locally small, in which case the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is all of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>; for a proof see <a href="http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf" rel="noreferrer">On the Size of Categories</a> by Freyd and Street.</p> Answer 8: <p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\mathbf{Set}\to\mathbf{Set}$ that maps all sets to $1$ except that $\emptyset$ gets mapped to $\emptyset$. Then $\mathbf{Set}^\mathbf{Set}(F,Id)$ is essentially the class of all sets. 
Whether the category $\mathcal{D}^\mathcal{C}$ exists for non-small $\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href="https://ncatlab.org/nlab/show/large+category#properties" rel="noreferrer">foundational assumptions</a>. I'll assume local smallness for the remainder.</p> <p>As for the Yoneda lemma, it means that if $\mathcal{C}$ isn't small then the category $\mathbf{Set}^\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-"set" of that category in this context. It may be a "large functor" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\mathsf{Nat}(\mathsf{hom}(-,C), F) \cong FC$, it's just that $\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). In general, $\mathsf{Nat}(F,G)$ will be a "proper class" and so part of proving the Yoneda lemma would be proving that $\mathsf{Nat}(\mathsf{hom}(-,C), F)$ is, in fact, a set.</p> <p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a "conservativity" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p> <p>You may find <a href="http://www.brics.dk/RS/01/27/" rel="noreferrer">A Higher-Order Calculus for Categories</a> interesting.</p> Answer 9: <p>The reason <span class="math-container">$\mathcal D^{\mathcal C}$</span> is usually only defined for small categories <span class="math-container">$\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href="https://arxiv.org/abs/0810.1279" rel="noreferrer">Set Theory for Category Theory</a>).</p> <p>Regarding the Yoneda lemma, what you wrote seems confused: <span class="math-container">$\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class="math-container">$\mathcal D^{\mathcal C}$</span> (e.g. it's certainly not if <span class="math-container">$\mathcal D=\mathbf 1\neq\mathcal C$</span> where <span class="math-container">$\mathbf 1$</span> is the category with one object). 
Furthermore, the statement that (for small <span class="math-container">$\mathcal C$</span>) <span class="math-container">$\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class="math-container">$\mathcal Set^{\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p> <p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class="math-container">$\mathcal F$</span> on a category <span class="math-container">$\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to <span class="math-container">$F$</span> are in natural bijection with the class <span class="math-container">$\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class="math-container">$\mathcal C$</span>.</p> <p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> is representable functor to <span class="math-container">$\mathcal Set$</span>. Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to a set-valued functor <span class="math-container">$F$</span> are in natural bijection with the <em>set</em> <span class="math-container">$F(X)$</span>.</p> <p>The <strong>cocompletion</strong> of a category <span class="math-container">$\mathcal C$</span> is a functor <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> initial among functors <span class="math-container">$\mathcal C\to\mathcal D$</span> for <span class="math-container">$\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> of a locally small category, which always exists (and <span class="math-container">$\hat{\mathcal C}$</span> is actually locally small). Explicitly, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is constructed by identifying <span class="math-container">$\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class="math-container">$\mathrm{Hom}_{\mathcal C}(-,X)$</span>, identifying the objects of <span class="math-container">$\hat{\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class="math-container">$\mathrm{Hom}_{\hat{\mathcal C}}(J,K)=\mathrm{Nat}(\mathrm{colim}_j\mathrm{Hom}(-,X_j),\mathrm{colim}_k\mathrm{Hom}(-,X_k))=\mathrm{lim}_j\mathrm{colim}_k\mathrm{Hom}_{\mathcal C}(X_k,X_j)$</span>.</p> <p>In particular, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>, even if the latter is large or does not exist. 
In fact, a category <span class="math-container">$\mathcal C$</span> is equivalent to a small category if and only if <span class="math-container">$\mathcal C$</span> and <span class="math-container">$\mathcal Set^{\mathcal C}$</span> are both locally small, in which case the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is all of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>; for a proof see <a href="http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf" rel="noreferrer">On the Size of Categories</a> by Freyd and Street.</p> Answer 10: <p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\mathbf{Set}\to\mathbf{Set}$ that maps all sets to $1$ except that $\emptyset$ gets mapped to $\emptyset$. Then $\mathbf{Set}^\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\mathcal{D}^\mathcal{C}$ exists for non-small $\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href="https://ncatlab.org/nlab/show/large+category#properties" rel="noreferrer">foundational assumptions</a>. I'll assume local smallness for the remainder.</p> <p>As for the Yoneda lemma, it means that if $\mathcal{C}$ isn't small then the category $\mathbf{Set}^\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-"set" of that category in this context. It may be a "large functor" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\mathsf{Nat}(\mathsf{hom}(-,C), F) \cong FC$, it's just that $\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). In general, $\mathsf{Nat}(F,G)$ will be a "proper class" and so part of proving the Yoneda lemma would be proving that $\mathsf{Nat}(\mathsf{hom}(-,C), F)$ is, in fact, a set.</p> <p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a "conservativity" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p> <p>You may find <a href="http://www.brics.dk/RS/01/27/" rel="noreferrer">A Higher-Order Calculus for Categories</a> interesting.</p> Answer 11: <p>The reason <span class="math-container">$\mathcal D^{\mathcal C}$</span> is usually only defined for small categories <span class="math-container">$\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. 
all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href="https://arxiv.org/abs/0810.1279" rel="noreferrer">Set Theory for Category Theory</a>).</p> <p>Regarding the Yoneda lemma, what you wrote seems confused: <span class="math-container">$\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class="math-container">$\mathcal D^{\mathcal C}$</span> (e.g. it's certainly not if <span class="math-container">$\mathcal D=\mathbf 1\neq\mathcal C$</span> where <span class="math-container">$\mathbf 1$</span> is the category with one object). Furthermore, the statement that (for small <span class="math-container">$\mathcal C$</span>) <span class="math-container">$\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class="math-container">$\mathcal Set^{\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p> <p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class="math-container">$\mathcal F$</span> on a category <span class="math-container">$\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to <span class="math-container">$F$</span> are in natural bijection with the class <span class="math-container">$\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class="math-container">$\mathcal C$</span>.</p> <p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> is representable functor to <span class="math-container">$\mathcal Set$</span>. Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to a set-valued functor <span class="math-container">$F$</span> are in natural bijection with the <em>set</em> <span class="math-container">$F(X)$</span>.</p> <p>The <strong>cocompletion</strong> of a category <span class="math-container">$\mathcal C$</span> is a functor <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> initial among functors <span class="math-container">$\mathcal C\to\mathcal D$</span> for <span class="math-container">$\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> of a locally small category, which always exists (and <span class="math-container">$\hat{\mathcal C}$</span> is actually locally small). 
Explicitly, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is constructed by identifying <span class="math-container">$\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class="math-container">$\mathrm{Hom}_{\mathcal C}(-,X)$</span>, identifying the objects of <span class="math-container">$\hat{\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class="math-container">$\mathrm{Hom}_{\hat{\mathcal C}}(J,K)=\mathrm{Nat}(\mathrm{colim}_j\mathrm{Hom}(-,X_j),\mathrm{colim}_k\mathrm{Hom}(-,X_k))=\mathrm{lim}_j\mathrm{colim}_k\mathrm{Hom}_{\mathcal C}(X_k,X_j)$</span>.</p> <p>In particular, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>, even if the latter is large or does not exist. In fact, a category <span class="math-container">$\mathcal C$</span> is equivalent to a small category if and only if <span class="math-container">$\mathcal C$</span> and <span class="math-container">$\mathcal Set^{\mathcal C}$</span> are both locally small, in which case the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is all of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>; for a proof see <a href="http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf" rel="noreferrer">On the Size of Categories</a> by Freyd and Street.</p> Answer 12: <p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\mathbf{Set}\to\mathbf{Set}$ that maps all sets to $1$ except that $\emptyset$ gets mapped to $\emptyset$. Then $\mathbf{Set}^\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\mathcal{D}^\mathcal{C}$ exists for non-small $\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href="https://ncatlab.org/nlab/show/large+category#properties" rel="noreferrer">foundational assumptions</a>. I'll assume local smallness for the remainder.</p> <p>As for the Yoneda lemma, it means that if $\mathcal{C}$ isn't small then the category $\mathbf{Set}^\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-"set" of that category in this context. It may be a "large functor" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\mathsf{Nat}(\mathsf{hom}(-,C), F) \cong FC$, it's just that $\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). In general, $\mathsf{Nat}(F,G)$ will be a "proper class" and so part of proving the Yoneda lemma would be proving that $\mathsf{Nat}(\mathsf{hom}(-,C), F)$ is, in fact, a set.</p> <p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. 
I suspect someone has made a "conservativity" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p> <p>You may find <a href="http://www.brics.dk/RS/01/27/" rel="noreferrer">A Higher-Order Calculus for Categories</a> interesting.</p> Answer 13: <p>The reason <span class="math-container">$\mathcal D^{\mathcal C}$</span> is usually only defined for small categories <span class="math-container">$\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href="https://arxiv.org/abs/0810.1279" rel="noreferrer">Set Theory for Category Theory</a>).</p> <p>Regarding the Yoneda lemma, what you wrote seems confused: <span class="math-container">$\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class="math-container">$\mathcal D^{\mathcal C}$</span> (e.g. it's certainly not if <span class="math-container">$\mathcal D=\mathbf 1\neq\mathcal C$</span> where <span class="math-container">$\mathbf 1$</span> is the category with one object). Furthermore, the statement that (for small <span class="math-container">$\mathcal C$</span>) <span class="math-container">$\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class="math-container">$\mathcal Set^{\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p> <p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class="math-container">$\mathcal F$</span> on a category <span class="math-container">$\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to <span class="math-container">$F$</span> are in natural bijection with the class <span class="math-container">$\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class="math-container">$\mathcal C$</span>.</p> <p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> is representable functor to <span class="math-container">$\mathcal Set$</span>. Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to a set-valued functor <span class="math-container">$F$</span> are in natural bijection with the <em>set</em> <span class="math-container">$F(X)$</span>.</p> <p>The <strong>cocompletion</strong> of a category <span class="math-container">$\mathcal C$</span> is a functor <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> initial among functors <span class="math-container">$\mathcal C\to\mathcal D$</span> for <span class="math-container">$\mathcal D$</span> a category complete under small colimits. 
The <strong>Yoneda embedding</strong> is the cocompletion <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> of a locally small category, which always exists (and <span class="math-container">$\hat{\mathcal C}$</span> is actually locally small). Explicitly, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is constructed by identifying <span class="math-container">$\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class="math-container">$\mathrm{Hom}_{\mathcal C}(-,X)$</span>, identifying the objects of <span class="math-container">$\hat{\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class="math-container">$\mathrm{Hom}_{\hat{\mathcal C}}(J,K)=\mathrm{Nat}(\mathrm{colim}_j\mathrm{Hom}(-,X_j),\mathrm{colim}_k\mathrm{Hom}(-,X_k))=\mathrm{lim}_j\mathrm{colim}_k\mathrm{Hom}_{\mathcal C}(X_k,X_j)$</span>.</p> <p>In particular, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>, even if the latter is large or does not exist. In fact, a category <span class="math-container">$\mathcal C$</span> is equivalent to a small category if and only if <span class="math-container">$\mathcal C$</span> and <span class="math-container">$\mathcal Set^{\mathcal C}$</span> are both locally small, in which case the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is all of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>; for a proof see <a href="http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf" rel="noreferrer">On the Size of Categories</a> by Freyd and Street.</p> Answer 14: <p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\mathbf{Set}\to\mathbf{Set}$ that maps all sets to $1$ except that $\emptyset$ gets mapped to $\emptyset$. Then $\mathbf{Set}^\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\mathcal{D}^\mathcal{C}$ exists for non-small $\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href="https://ncatlab.org/nlab/show/large+category#properties" rel="noreferrer">foundational assumptions</a>. I'll assume local smallness for the remainder.</p> <p>As for the Yoneda lemma, it means that if $\mathcal{C}$ isn't small then the category $\mathbf{Set}^\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-"set" of that category in this context. It may be a "large functor" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\mathsf{Nat}(\mathsf{hom}(-,C), F) \cong FC$, it's just that $\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). 
In general, $\mathsf{Nat}(F,G)$ will be a "proper class" and so part of proving the Yoneda lemma would be proving that $\mathsf{Nat}(\mathsf{hom}(-,C), F)$ is, in fact, a set.</p> <p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a "conservativity" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p> <p>You may find <a href="http://www.brics.dk/RS/01/27/" rel="noreferrer">A Higher-Order Calculus for Categories</a> interesting.</p> Answer 15: <p>The reason <span class="math-container">$\mathcal D^{\mathcal C}$</span> is usually only defined for small categories <span class="math-container">$\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href="https://arxiv.org/abs/0810.1279" rel="noreferrer">Set Theory for Category Theory</a>).</p> <p>Regarding the Yoneda lemma, what you wrote seems confused: <span class="math-container">$\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class="math-container">$\mathcal D^{\mathcal C}$</span> (e.g. it's certainly not if <span class="math-container">$\mathcal D=\mathbf 1\neq\mathcal C$</span> where <span class="math-container">$\mathbf 1$</span> is the category with one object). Furthermore, the statement that (for small <span class="math-container">$\mathcal C$</span>) <span class="math-container">$\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class="math-container">$\mathcal Set^{\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p> <p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class="math-container">$\mathcal F$</span> on a category <span class="math-container">$\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to <span class="math-container">$F$</span> are in natural bijection with the class <span class="math-container">$\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class="math-container">$\mathcal C$</span>.</p> <p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> is representable functor to <span class="math-container">$\mathcal Set$</span>. 
Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to a set-valued functor <span class="math-container">$F$</span> are in natural bijection with the <em>set</em> <span class="math-container">$F(X)$</span>.</p> <p>The <strong>cocompletion</strong> of a category <span class="math-container">$\mathcal C$</span> is a functor <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> initial among functors <span class="math-container">$\mathcal C\to\mathcal D$</span> for <span class="math-container">$\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> of a locally small category, which always exists (and <span class="math-container">$\hat{\mathcal C}$</span> is actually locally small). Explicitly, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is constructed by identifying <span class="math-container">$\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class="math-container">$\mathrm{Hom}_{\mathcal C}(-,X)$</span>, identifying the objects of <span class="math-container">$\hat{\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class="math-container">$\mathrm{Hom}_{\hat{\mathcal C}}(J,K)=\mathrm{Nat}(\mathrm{colim}_j\mathrm{Hom}(-,X_j),\mathrm{colim}_k\mathrm{Hom}(-,X_k))=\mathrm{lim}_j\mathrm{colim}_k\mathrm{Hom}_{\mathcal C}(X_k,X_j)$</span>.</p> <p>In particular, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>, even if the latter is large or does not exist. In fact, a category <span class="math-container">$\mathcal C$</span> is equivalent to a small category if and only if <span class="math-container">$\mathcal C$</span> and <span class="math-container">$\mathcal Set^{\mathcal C}$</span> are both locally small, in which case the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is all of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>; for a proof see <a href="http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf" rel="noreferrer">On the Size of Categories</a> by Freyd and Street.</p> Answer 16: <p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\mathbf{Set}\to\mathbf{Set}$ that maps all sets to $1$ except that $\emptyset$ gets mapped to $\emptyset$. Then $\mathbf{Set}^\mathbf{Set}(F,Id)$ is essentially the class of all sets. Whether the category $\mathcal{D}^\mathcal{C}$ exists for non-small $\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href="https://ncatlab.org/nlab/show/large+category#properties" rel="noreferrer">foundational assumptions</a>. 
I'll assume local smallness for the remainder.</p> <p>As for the Yoneda lemma, it means that if $\mathcal{C}$ isn't small then the category $\mathbf{Set}^\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-"set" of that category in this context. It may be a "large functor" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\mathsf{Nat}(\mathsf{hom}(-,C), F) \cong FC$, it's just that $\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). In general, $\mathsf{Nat}(F,G)$ will be a "proper class" and so part of proving the Yoneda lemma would be proving that $\mathsf{Nat}(\mathsf{hom}(-,C), F)$ is, in fact, a set.</p> <p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a "conservativity" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p> <p>You may find <a href="http://www.brics.dk/RS/01/27/" rel="noreferrer">A Higher-Order Calculus for Categories</a> interesting.</p> Answer 17: <p>The reason <span class="math-container">$\mathcal D^{\mathcal C}$</span> is usually only defined for small categories <span class="math-container">$\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href="https://arxiv.org/abs/0810.1279" rel="noreferrer">Set Theory for Category Theory</a>).</p> <p>Regarding the Yoneda lemma, what you wrote seems confused: <span class="math-container">$\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class="math-container">$\mathcal D^{\mathcal C}$</span> (e.g. it's certainly not if <span class="math-container">$\mathcal D=\mathbf 1\neq\mathcal C$</span> where <span class="math-container">$\mathbf 1$</span> is the category with one object). Furthermore, the statement that (for small <span class="math-container">$\mathcal C$</span>) <span class="math-container">$\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class="math-container">$\mathcal Set^{\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p> <p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class="math-container">$\mathcal F$</span> on a category <span class="math-container">$\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to <span class="math-container">$F$</span> are in natural bijection with the class <span class="math-container">$\mathcal F(X)$</span>. 
One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class="math-container">$\mathcal C$</span>.</p> <p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> is representable functor to <span class="math-container">$\mathcal Set$</span>. Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to a set-valued functor <span class="math-container">$F$</span> are in natural bijection with the <em>set</em> <span class="math-container">$F(X)$</span>.</p> <p>The <strong>cocompletion</strong> of a category <span class="math-container">$\mathcal C$</span> is a functor <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> initial among functors <span class="math-container">$\mathcal C\to\mathcal D$</span> for <span class="math-container">$\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> of a locally small category, which always exists (and <span class="math-container">$\hat{\mathcal C}$</span> is actually locally small). Explicitly, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is constructed by identifying <span class="math-container">$\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class="math-container">$\mathrm{Hom}_{\mathcal C}(-,X)$</span>, identifying the objects of <span class="math-container">$\hat{\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class="math-container">$\mathrm{Hom}_{\hat{\mathcal C}}(J,K)=\mathrm{Nat}(\mathrm{colim}_j\mathrm{Hom}(-,X_j),\mathrm{colim}_k\mathrm{Hom}(-,X_k))=\mathrm{lim}_j\mathrm{colim}_k\mathrm{Hom}_{\mathcal C}(X_k,X_j)$</span>.</p> <p>In particular, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>, even if the latter is large or does not exist. In fact, a category <span class="math-container">$\mathcal C$</span> is equivalent to a small category if and only if <span class="math-container">$\mathcal C$</span> and <span class="math-container">$\mathcal Set^{\mathcal C}$</span> are both locally small, in which case the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is all of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>; for a proof see <a href="http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf" rel="noreferrer">On the Size of Categories</a> by Freyd and Street.</p> Answer 18: <p>It's provable that the category of functors from a small category is locally small while it is not necessarily the case for a locally small category. For example, let $F$ be the functor $\mathbf{Set}\to\mathbf{Set}$ that maps all sets to $1$ except that $\emptyset$ gets mapped to $\emptyset$. Then $\mathbf{Set}^\mathbf{Set}(F,Id)$ is essentially the class of all sets. 
Whether the category $\mathcal{D}^\mathcal{C}$ exists for non-small $\mathcal{C}$ will depend on what category you are working in (category of small/locally small/large categories) and <a href="https://ncatlab.org/nlab/show/large+category#properties" rel="noreferrer">foundational assumptions</a>. I'll assume local smallness for the remainder.</p> <p>As for the Yoneda lemma, it means that if $\mathcal{C}$ isn't small then the category $\mathbf{Set}^\mathcal{C^{op}}$ need not exist, therefore, the Yoneda embedding can't be presented as an arrow of that category or equivalently as an element of the hom-"set" of that category in this context. It may be a "large functor" or some other notion, but it's not an arrow in the category of locally small categories. We can still state and prove the Yoneda lemma, $\mathsf{Nat}(\mathsf{hom}(-,C), F) \cong FC$, it's just that $\mathsf{Nat}$ is not a hom-set between objects in a (locally small) category. We don't need to describe a natural transformation as an arrow in a functor category or a natural isomorphism as mutually inverse such arrows. We can directly formulate these notions (and then later prove that they do lead to categories sometimes). In general, $\mathsf{Nat}(F,G)$ will be a "proper class" and so part of proving the Yoneda lemma would be proving that $\mathsf{Nat}(\mathsf{hom}(-,C), F)$ is, in fact, a set.</p> <p>Nevertheless, it is often convenient to have this additional structure such as cartesian closure. I suspect someone has made a "conservativity" theorem that warrants proving facts about locally small categories by operating on them as large categories, and there are set-theoretic foundations that give large categories some of the properties we'd like.</p> <p>You may find <a href="http://www.brics.dk/RS/01/27/" rel="noreferrer">A Higher-Order Calculus for Categories</a> interesting.</p> Answer 19: <p>The reason <span class="math-container">$\mathcal D^{\mathcal C}$</span> is usually only defined for small categories <span class="math-container">$\mathcal C$</span> is that it is only for small categories that the category tends to exist more or less independently of the notion of classes chosen (e.g. all functor categories always exist if you use a Grothendieck universe, but if you use &quot;syntactic&quot; classes it is only functor categories from small categories that are guaranteed to exist; for more on the various notions of classes see Mike Shulman's expository article <a href="https://arxiv.org/abs/0810.1279" rel="noreferrer">Set Theory for Category Theory</a>).</p> <p>Regarding the Yoneda lemma, what you wrote seems confused: <span class="math-container">$\mathcal C$</span> is not anti-equivalent to a subcategory of an <em>arbitrary</em> <span class="math-container">$\mathcal D^{\mathcal C}$</span> (e.g. it's certainly not if <span class="math-container">$\mathcal D=\mathbf 1\neq\mathcal C$</span> where <span class="math-container">$\mathbf 1$</span> is the category with one object). 
Furthermore, the statement that (for small <span class="math-container">$\mathcal C$</span>) <span class="math-container">$\mathcal C$</span> is anti-equivalent to the subcategory of representable functors in <span class="math-container">$\mathcal Set^{\mathcal C}$</span> is a special case of the Yoneda embedding, not the Yoneda lemma.</p> <p>The <strong>Yoneda lemma</strong> states that if you have a class-valued copresheaf <span class="math-container">$\mathcal F$</span> on a category <span class="math-container">$\mathcal C$</span>, then the natural transformations from the representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to <span class="math-container">$F$</span> are in natural bijection with the class <span class="math-container">$\mathcal F(X)$</span>. One consequence is that the category of representable class-valued copresheaves is isomorphic to the opposite category of <span class="math-container">$\mathcal C$</span>.</p> <p>A category is <strong>locally small</strong> if each representable class-valued copresheaf <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> is representable functor to <span class="math-container">$\mathcal Set$</span>. Since sets are small classes, the Yoneda lemma implies that the natural transformations from the representable set-valued functor <span class="math-container">$\mathrm{Hom}_{\mathcal C}(X,-)$</span> to a set-valued functor <span class="math-container">$F$</span> are in natural bijection with the <em>set</em> <span class="math-container">$F(X)$</span>.</p> <p>The <strong>cocompletion</strong> of a category <span class="math-container">$\mathcal C$</span> is a functor <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> initial among functors <span class="math-container">$\mathcal C\to\mathcal D$</span> for <span class="math-container">$\mathcal D$</span> a category complete under small colimits. The <strong>Yoneda embedding</strong> is the cocompletion <span class="math-container">$\mathcal C\to\hat{\mathcal C}$</span> of a locally small category, which always exists (and <span class="math-container">$\hat{\mathcal C}$</span> is actually locally small). Explicitly, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is constructed by identifying <span class="math-container">$\mathcal C$</span> with the category of representable (contravariant) set-valued functors <span class="math-container">$\mathrm{Hom}_{\mathcal C}(-,X)$</span>, identifying the objects of <span class="math-container">$\hat{\mathcal C}$</span> with small diagrams of representable functors, then using the facts that presheaves send colimits to limits and that colimits of functors are computed pointwise, to identify <span class="math-container">$\mathrm{Hom}_{\hat{\mathcal C}}(J,K)=\mathrm{Nat}(\mathrm{colim}_j\mathrm{Hom}(-,X_j),\mathrm{colim}_k\mathrm{Hom}(-,X_k))=\mathrm{lim}_j\mathrm{colim}_k\mathrm{Hom}_{\mathcal C}(X_k,X_j)$</span>.</p> <p>In particular, the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> of a locally small category is always a full locally small &quot;subcategory&quot; of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>, even if the latter is large or does not exist. 
In fact, a category <span class="math-container">$\mathcal C$</span> is equivalent to a small category if and only if <span class="math-container">$\mathcal C$</span> and <span class="math-container">$\mathcal Set^{\mathcal C}$</span> are both locally small, in which case the cocompletion <span class="math-container">$\hat{\mathcal C}$</span> is all of <span class="math-container">$\mathcal Set^{\mathcal C}$</span>; for a proof see <a href="http://www.tac.mta.ca/tac/volumes/1995/n9/v1n9.pdf" rel="noreferrer">On the Size of Categories</a> by Freyd and Street.</p>
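To make the bijection in the Yoneda lemma above concrete, here is the standard pair of mutually inverse maps, written out as a short sketch (this only spells out the usual proof outline of the lemma as stated above; the names $\Phi$ and $\Psi$ are ours):

$$\Phi\colon \mathrm{Nat}\big(\mathrm{Hom}_{\mathcal C}(X,-),\,F\big)\to F(X),\qquad \Phi(\eta)=\eta_X(\mathrm{id}_X),$$

$$\Psi\colon F(X)\to \mathrm{Nat}\big(\mathrm{Hom}_{\mathcal C}(X,-),\,F\big),\qquad \Psi(a)_Y(f)=F(f)(a)\quad\text{for }f\in\mathrm{Hom}_{\mathcal C}(X,Y).$$

Naturality of $\eta$ forces $\eta_Y(f)=F(f)(\eta_X(\mathrm{id}_X))$, so $\Psi(\Phi(\eta))=\eta$, and $\Phi(\Psi(a))=F(\mathrm{id}_X)(a)=a$. In particular $\mathrm{Nat}(\mathrm{Hom}_{\mathcal C}(X,-),F)$ is in bijection with $F(X)$, which is exactly why it is a set as soon as $F$ is set-valued.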
Using the digits $1-9$ each only once, find the closest sum to $1000$.
2,093,910
7
https://math.stackexchange.com/questions/2093910/using-the-digits-1-9-each-only-once-find-the-closest-sum-to-1000
<p>Using the digits $1-9$ each only once, find the closest sum to $1000$ by filling in the following spaces.</p> <p>$$\_\,\_\,\_\;+\;\_\,\_\,\_\;+\;\_\,\_\,\_$$</p> <p>I tried to get $1000$ but couldn't (and I'm not even sure if you can). So then I tried $999$. I was able to get $152+378+469=999$ by guess and check, but I am not even able to check whether that is the correct answer. I believe there are $9!$ possible choices. I was wondering if there is some logical mathematical reasoning to get to the correct answer and prove it correct.</p>
[ { "answer_id": 2093912, "score": 14, "text": "<p>By the divisibility by 9 rule (and these numbers add up to 45, i.e. divisible by 9), whatever sum you produce, the digits of the sum will always be divisible by 9, and hence 999 is the best you can do.</p>\n" }, { "answer_id": 2093914, "score": 5, "text": "<p>One can prove that there is no way to sum up to $1000$:</p>\n\n<p>Using all digits only <em>once</em> to write three $3$-digit numbers will yield a sum that is congruent with $0$ modulus $3$; $1000$ is congruent with $1$ and $1001$ with $2$, so $999$ is the closest you can get. Guessing the final answer is just fine.</p>\n" }, { "answer_id": 2094038, "score": 3, "text": "<p>There's a strategy you can use to find a solution that adds to 999, convert one solution into others, and once you understand how it works it's easy to see that 1000 is impossible. Using this method I found $128+375+496$ in a couple of minutes, and I can easily turn that into $176+395+428=999$ or $478+326+195=999$. </p>\n\n<p>First just write down any three numbers, even if they don't work, say</p>\n\n<pre><code> 147\n 258\n 369\n</code></pre>\n\n<p>This adds to 774, which is too small. But what happens if we switch two digits in the same row? Say we switch the 4 and the 7. This adds 30 to the total (because the 4 in the tens' column becomes a 7) and takes away 3 (because the 7 in the units' column becomes a 4) for a net gain of 27. So the total now must be $774+27=801$:</p>\n\n<pre><code> 174\n 258\n 369\n</code></pre>\n\n<p>The total is 801, so we want to increase it by around 200. So we should look for a digit in the hundreds' place that is 2 less than the digit in the units' place. When we switch these the total will increase by 200 and decrease by 2. None of the three rows does this. For example, if we were to switch the 2 and the 8 in the second row the total would increase by around 600 which is too much.</p>\n\n<p>But if 258 were 254 instead, then switching it to 452 would increase the total by about 200. And we can turn 258 into 254 by switching the 4 and the 8. This doesn't change the total at all, because 174 increases by 4 and 258 decreases by 4:</p>\n\n<pre><code> 178\n 254\n 369\n</code></pre>\n\n<p>The total is still 801.</p>\n\n<p>Switching the 2 and the 4 works:</p>\n\n<pre><code> 178\n 452\n 369\n</code></pre>\n\n<p>Now the total is exactly 999 and we win.</p>\n\n<p>If we want another solution, just switch any two digits in the same column, say the 3 and the 1:</p>\n\n<pre><code> 378\n 452\n 169\n</code></pre>\n\n<p>And it's still 999.</p>\n\n<p>If you think a little more about what is happening here you will see why it's not possible to make 1000 exactly.</p>\n" }, { "answer_id": 2093912, "score": 14, "text": "<p>By the divisibility by 9 rule (and these numbers add up to 45, i.e. divisible by 9), whatever sum you produce, the digits of the sum will always be divisible by 9, and hence 999 is the best you can do.</p>\n" }, { "answer_id": 2093914, "score": 5, "text": "<p>One can prove that there is no way to sum up to $1000$:</p>\n\n<p>Using all digits only <em>once</em> to write three $3$-digit numbers will yield a sum that is congruent with $0$ modulus $3$; $1000$ is congruent with $1$ and $1001$ with $2$, so $999$ is the closest you can get. 
When we switch these the total will increase by 200 and decrease by 2. None of the three rows does this. For example, if we were to switch the 2 and the 8 in the second row the total would increase by around 600 which is too much.</p>\n\n<p>But if 258 were 254 instead, then switching it to 452 would increase the total by about 200. And we can turn 258 into 254 by switching the 4 and the 8. This doesn't change the total at all, because 174 increases by 4 and 258 decreases by 4:</p>\n\n<pre><code> 178\n 254\n 369\n</code></pre>\n\n<p>The total is still 801.</p>\n\n<p>Switching the 2 and the 4 works:</p>\n\n<pre><code> 178\n 452\n 369\n</code></pre>\n\n<p>Now the total is exactly 999 and we win.</p>\n\n<p>If we want another solution, just switch any two digits in the same column, say the 3 and the 1:</p>\n\n<pre><code> 378\n 452\n 169\n</code></pre>\n\n<p>And it's still 999.</p>\n\n<p>If you think a little more about what is happening here you will see why it's not possible to make 1000 exactly.</p>\n" }, { "answer_id": 2093912, "score": 14, "text": "<p>By the divisibility by 9 rule (and these numbers add up to 45, i.e. divisible by 9), whatever sum you produce, the digits of the sum will always be divisible by 9, and hence 999 is the best you can do.</p>\n" }, { "answer_id": 2093914, "score": 5, "text": "<p>One can prove that there is no way to sum up to $1000$:</p>\n\n<p>Using all digits only <em>once</em> to write three $3$-digit numbers will yield a sum that is congruent with $0$ modulus $3$; $1000$ is congruent with $1$ and $1001$ with $2$, so $999$ is the closest you can get. Guessing the final answer is just fine.</p>\n" }, { "answer_id": 2094038, "score": 3, "text": "<p>There's a strategy you can use to find a solution that adds to 999, convert one solution into others, and once you understand how it works it's easy to see that 1000 is impossible. Using this method I found $128+375+496$ in a couple of minutes, and I can easily turn that into $176+395+428=999$ or $478+326+195=999$. </p>\n\n<p>First just write down any three numbers, even if they don't work, say</p>\n\n<pre><code> 147\n 258\n 369\n</code></pre>\n\n<p>This adds to 774, which is too small. But what happens if we switch two digits in the same row? Say we switch the 4 and the 7. This adds 30 to the total (because the 4 in the tens' column becomes a 7) and takes away 3 (because the 7 in the units' column becomes a 4) for a net gain of 27. So the total now must be $774+27=801$:</p>\n\n<pre><code> 174\n 258\n 369\n</code></pre>\n\n<p>The total is 801, so we want to increase it by around 200. So we should look for a digit in the hundreds' place that is 2 less than the digit in the units' place. When we switch these the total will increase by 200 and decrease by 2. None of the three rows does this. For example, if we were to switch the 2 and the 8 in the second row the total would increase by around 600 which is too much.</p>\n\n<p>But if 258 were 254 instead, then switching it to 452 would increase the total by about 200. And we can turn 258 into 254 by switching the 4 and the 8. 
This doesn't change the total at all, because 174 increases by 4 and 258 decreases by 4:</p>\n\n<pre><code> 178\n 254\n 369\n</code></pre>\n\n<p>The total is still 801.</p>\n\n<p>Switching the 2 and the 4 works:</p>\n\n<pre><code> 178\n 452\n 369\n</code></pre>\n\n<p>Now the total is exactly 999 and we win.</p>\n\n<p>If we want another solution, just switch any two digits in the same column, say the 3 and the 1:</p>\n\n<pre><code> 378\n 452\n 169\n</code></pre>\n\n<p>And it's still 999.</p>\n\n<p>If you think a little more about what is happening here you will see why it's not possible to make 1000 exactly.</p>\n" } ]
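The bookkeeping behind the swaps in this last answer is mechanical: exchanging two digits of one row changes that row by (difference of digits) times (difference of place values). A minimal sketch of that rule (Python assumed; the helper name swap_delta is hypothetical and not part of the original answer):

<pre><code>def swap_delta(number: int, i: int, j: int) -> int:
    """Change in a 3-digit number when the digits at 0-indexed
    positions i and j (counted from the left) are swapped."""
    digits = [int(ch) for ch in str(number)]
    place = [100, 10, 1]
    return (digits[j] - digits[i]) * (place[i] - place[j])

# Swapping the 4 and the 7 in 147 gains 30 - 3 = 27 (147 -> 174), as in the answer.
assert swap_delta(147, 1, 2) == 27
# Swapping the 2 and the 4 in 254 gains 198, i.e. "about 200" (254 -> 452).
assert swap_delta(254, 0, 2) == 198
</code></pre>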
Question: Using the digits $1-9$, each only once, find the closest sum to $1000$. Answer 1: <p>By the divisibility-by-9 rule (the nine digits add up to 45, which is divisible by 9), whatever sum you produce will itself always be divisible by 9, and hence 999 is the best you can do.</p> Answer 2: <p>One can prove that there is no way to sum to exactly $1000$:</p> <p>Using each digit only <em>once</em> to write three $3$-digit numbers will yield a sum that is congruent to $0$ modulo $3$; $1000$ is congruent to $1$ and $1001$ to $2$, so $999$ is the closest you can get, and exhibiting any arrangement that attains it (e.g. $178+452+369=999$, found below) finishes the argument.</p> Answer 3: <p>There's a strategy you can use to find a solution that adds to 999 and to convert one solution into others; once you understand how it works, it's easy to see that 1000 is impossible. Using this method I found $128+375+496$ in a couple of minutes, and I can easily turn that into $176+395+428=999$ or $478+326+195=999$.
</p> <p>First just write down any three numbers, even if they don't work, say</p> <pre><code> 147 258 369 </code></pre> <p>This adds to 774, which is too small. But what happens if we switch two digits in the same row? Say we switch the 4 and the 7. This adds 30 to the total (because the 4 in the tens' column becomes a 7) and takes away 3 (because the 7 in the units' column becomes a 4) for a net gain of 27. So the total now must be $774+27=801$:</p> <pre><code> 174 258 369 </code></pre> <p>The total is 801, so we want to increase it by around 200. So we should look for a digit in the hundreds' place that is 2 less than the digit in the units' place. When we switch these the total will increase by 200 and decrease by 2. None of the three rows does this. For example, if we were to switch the 2 and the 8 in the second row the total would increase by around 600 which is too much.</p> <p>But if 258 were 254 instead, then switching it to 452 would increase the total by about 200. And we can turn 258 into 254 by switching the 4 and the 8. This doesn't change the total at all, because 174 increases by 4 and 258 decreases by 4:</p> <pre><code> 178 254 369 </code></pre> <p>The total is still 801.</p> <p>Switching the 2 and the 4 works:</p> <pre><code> 178 452 369 </code></pre> <p>Now the total is exactly 999 and we win.</p> <p>If we want another solution, just switch any two digits in the same column, say the 3 and the 1:</p> <pre><code> 378 452 169 </code></pre> <p>And it's still 999.</p> <p>If you think a little more about what is happening here you will see why it's not possible to make 1000 exactly.</p>
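All three claims above — every such sum is a multiple of 9, $999$ is attainable, and $1000$ is not — are small enough to verify by exhaustive search. A minimal sketch (Python assumed; not part of the original answers):

<pre><code>from itertools import permutations

# Split each permutation of the digits 1-9 into three 3-digit numbers
# and collect every achievable total.
sums = set()
for p in permutations(range(1, 10)):
    a = 100 * p[0] + 10 * p[1] + p[2]
    b = 100 * p[3] + 10 * p[4] + p[5]
    c = 100 * p[6] + 10 * p[7] + p[8]
    sums.add(a + b + c)

assert all(s % 9 == 0 for s in sums)       # every total is a multiple of 9
assert 999 in sums and 1000 not in sums    # 999 works, 1000 is impossible
print(min(sums, key=lambda s: abs(s - 1000)))  # prints 999
</code></pre>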
Finding $\int^{4}_{0}(x^2+1)\,d(\lfloor x \rfloor)$, given that $\lfloor x \rfloor$ is the floor function of $x$
2,094,459
7
https://math.stackexchange.com/questions/2094459/finding-int4-0x21d-lfloor-x-rfloor-given-lfloor-x-rfloor-i
<p>Finding $\displaystyle \int^{4}_{0}(x^2+1)\,d(\lfloor x \rfloor)$, given that $\lfloor x \rfloor$ is the floor function of $x$.</p> <p>Assume $\displaystyle I = (x^2+1)\lfloor x \rfloor \bigg|^{4}_{0}-2\int^{4}_{0}x\lfloor x \rfloor\, dx$ (integration by parts).</p> <p>I have a doubt about the limits: I do not understand whether the limits correspond to $x$ or to $\lfloor x \rfloor$,</p> <p>because when we take $\displaystyle \int^{b}_{a}f(x)\,dx$, the limits correspond to $x$,</p> <p>but when we take $\displaystyle \int^{b}_{a}f(x)\,d(\lfloor x \rfloor)$, the limits correspond to $\lfloor x \rfloor$.</p> <p>Please clarify my doubt and also explain what is wrong with my method above. Thanks.</p>
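For orientation, a worked sketch (not part of the original post, and assuming the usual convention that $d(\lfloor x \rfloor)$ picks up unit mass at each jump of the floor function, i.e. at each integer in $(0,4]$):

$$\int_{0}^{4}(x^{2}+1)\,d\lfloor x\rfloor=\sum_{i\in\Bbb Z\cap(0,4]}\left(i^{2}+1\right)=(1^{2}+1)+(2^{2}+1)+(3^{2}+1)+(4^{2}+1)=2+5+10+17=34.$$

In particular, the limits $0$ and $4$ refer to $x$; the floor function only determines where the integral picks up mass. This is consistent with integration by parts, since $17\cdot 4-\int_{0}^{4}2x\lfloor x\rfloor\,dx=68-34=34$ as well.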
[ { "answer_id": 2222120, "score": 2, "text": "<p>About your attempt to integrate by parts: instead of doubting about the legitimacy of the change of the \"corresponding limits\", I advise you take the following steps:<br>\n1). Try to understand the basic theory of Riemann Stieljes integration.<br>\n2). Find a proof of integration by parts for R-S integrals, be careful about the assumptions, and then try to fully understand the proof. (A quick search on this site gives many, for example.)\n<hr>\nA helpful answer would have to be based on the definition you adopt for this integral. Here I give two treatments, which should lead to the same result.</p>\n\n<p>1). From a measure theoretic perspective, we have to find the measure $\\mu$ on $\\Bbb R$ induced by $g(x)=\\lfloor x\\rfloor$, in a way such that $\\mu((c,d])=g(d)-g(c)$ for $c&lt;d$. Clearly, we can find that $\\mu$ is an atomic measure which assigns mass $1$ to each integral point in $\\Bbb R$ and $0$ elsewhere, which basically says that you need only to care about integral points, to which you should assign mass $1$, over your integration domain, i.e.\n$$\\int_A f(x)\\mathrm dg(x)=\\sum_{i\\in \\Bbb Z\\cap A} f(i).$$</p>\n\n<p>2) Yet another simpler to understand treatment is the Riemann-Stieljes integral. For details of the computation, they bear much resemblance to computing a Riemann integral by definition, the only change being replace $\\Delta x_i$ by $\\Delta g_i$. For a fuller explanation, you should be able to check the definitions and relevant properties online (where resources should be plenty) by yourself.</p>\n" }, { "answer_id": 2222120, "score": 2, "text": "<p>About your attempt to integrate by parts: instead of doubting about the legitimacy of the change of the \"corresponding limits\", I advise you take the following steps:<br>\n1). Try to understand the basic theory of Riemann Stieljes integration.<br>\n2). Find a proof of integration by parts for R-S integrals, be careful about the assumptions, and then try to fully understand the proof. (A quick search on this site gives many, for example.)\n<hr>\nA helpful answer would have to be based on the definition you adopt for this integral. Here I give two treatments, which should lead to the same result.</p>\n\n<p>1). From a measure theoretic perspective, we have to find the measure $\\mu$ on $\\Bbb R$ induced by $g(x)=\\lfloor x\\rfloor$, in a way such that $\\mu((c,d])=g(d)-g(c)$ for $c&lt;d$. Clearly, we can find that $\\mu$ is an atomic measure which assigns mass $1$ to each integral point in $\\Bbb R$ and $0$ elsewhere, which basically says that you need only to care about integral points, to which you should assign mass $1$, over your integration domain, i.e.\n$$\\int_A f(x)\\mathrm dg(x)=\\sum_{i\\in \\Bbb Z\\cap A} f(i).$$</p>\n\n<p>2) Yet another simpler to understand treatment is the Riemann-Stieljes integral. For details of the computation, they bear much resemblance to computing a Riemann integral by definition, the only change being replace $\\Delta x_i$ by $\\Delta g_i$. For a fuller explanation, you should be able to check the definitions and relevant properties online (where resources should be plenty) by yourself.</p>\n" }, { "answer_id": 2222120, "score": 2, "text": "<p>About your attempt to integrate by parts: instead of doubting about the legitimacy of the change of the \"corresponding limits\", I advise you take the following steps:<br>\n1). Try to understand the basic theory of Riemann Stieljes integration.<br>\n2). 
Find a proof of integration by parts for R-S integrals, be careful about the assumptions, and then try to fully understand the proof. (A quick search on this site gives many, for example.)\n<hr>\nA helpful answer would have to be based on the definition you adopt for this integral. Here I give two treatments, which should lead to the same result.</p>\n\n<p>1). From a measure theoretic perspective, we have to find the measure $\\mu$ on $\\Bbb R$ induced by $g(x)=\\lfloor x\\rfloor$, in a way such that $\\mu((c,d])=g(d)-g(c)$ for $c&lt;d$. Clearly, we can find that $\\mu$ is an atomic measure which assigns mass $1$ to each integral point in $\\Bbb R$ and $0$ elsewhere, which basically says that you need only to care about integral points, to which you should assign mass $1$, over your integration domain, i.e.\n$$\\int_A f(x)\\mathrm dg(x)=\\sum_{i\\in \\Bbb Z\\cap A} f(i).$$</p>\n\n<p>2) Yet another simpler to understand treatment is the Riemann-Stieljes integral. For details of the computation, they bear much resemblance to computing a Riemann integral by definition, the only change being replace $\\Delta x_i$ by $\\Delta g_i$. For a fuller explanation, you should be able to check the definitions and relevant properties online (where resources should be plenty) by yourself.</p>\n" }, { "answer_id": 2222120, "score": 2, "text": "<p>About your attempt to integrate by parts: instead of doubting about the legitimacy of the change of the \"corresponding limits\", I advise you take the following steps:<br>\n1). Try to understand the basic theory of Riemann Stieljes integration.<br>\n2). Find a proof of integration by parts for R-S integrals, be careful about the assumptions, and then try to fully understand the proof. (A quick search on this site gives many, for example.)\n<hr>\nA helpful answer would have to be based on the definition you adopt for this integral. Here I give two treatments, which should lead to the same result.</p>\n\n<p>1). From a measure theoretic perspective, we have to find the measure $\\mu$ on $\\Bbb R$ induced by $g(x)=\\lfloor x\\rfloor$, in a way such that $\\mu((c,d])=g(d)-g(c)$ for $c&lt;d$. Clearly, we can find that $\\mu$ is an atomic measure which assigns mass $1$ to each integral point in $\\Bbb R$ and $0$ elsewhere, which basically says that you need only to care about integral points, to which you should assign mass $1$, over your integration domain, i.e.\n$$\\int_A f(x)\\mathrm dg(x)=\\sum_{i\\in \\Bbb Z\\cap A} f(i).$$</p>\n\n<p>2) Yet another simpler to understand treatment is the Riemann-Stieljes integral. For details of the computation, they bear much resemblance to computing a Riemann integral by definition, the only change being replace $\\Delta x_i$ by $\\Delta g_i$. For a fuller explanation, you should be able to check the definitions and relevant properties online (where resources should be plenty) by yourself.</p>\n" }, { "answer_id": 2222120, "score": 2, "text": "<p>About your attempt to integrate by parts: instead of doubting about the legitimacy of the change of the \"corresponding limits\", I advise you take the following steps:<br>\n1). Try to understand the basic theory of Riemann Stieljes integration.<br>\n2). Find a proof of integration by parts for R-S integrals, be careful about the assumptions, and then try to fully understand the proof. (A quick search on this site gives many, for example.)\n<hr>\nA helpful answer would have to be based on the definition you adopt for this integral. 
Question: finding $ \int^{4}_{0}(x^2+1)\,d(\lfloor x \rfloor)$, where $\lfloor x \rfloor$ is the floor function of $x$ Answer 1: <p>About your attempt to integrate by parts: instead of doubting the legitimacy of the change of the "corresponding limits", I advise you to take the following steps:<br> 1). Try to understand the basic theory of Riemann-Stieltjes integration.<br> 2). Find a proof of integration by parts for R-S integrals, paying careful attention to the assumptions, and then try to fully understand the proof. (A quick search on this site gives many, for example.) <hr> A helpful answer has to be based on the definition you adopt for this integral. Here I give two treatments, which lead to the same result.</p> <p>1). From a measure-theoretic perspective, we have to find the measure $\mu$ on $\Bbb R$ induced by $g(x)=\lfloor x\rfloor$, in such a way that $\mu((c,d])=g(d)-g(c)$ for $c&lt;d$. Clearly, $\mu$ is the atomic measure which assigns mass $1$ to each integer point of $\Bbb R$ and $0$ elsewhere. This says that only the integer points in your integration domain matter, each carrying mass $1$, i.e. $$\int_A f(x)\mathrm dg(x)=\sum_{i\in \Bbb Z\cap A} f(i).$$</p> <p>2). A second, more elementary treatment is via the Riemann-Stieltjes sums themselves. The computation bears much resemblance to computing a Riemann integral by definition, the only change being to replace $\Delta x_i$ by $\Delta g_i$. For a fuller explanation, you should be able to check the definitions and relevant properties online (where resources are plentiful) by yourself.</p>
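The atomic-measure formula above can be sanity-checked numerically for this very integral. The sketch below is an editorial illustration, not part of either answer; the partition size, the midpoint tags, and the helper name `rs_sum` are arbitrary choices made here. It compares a Riemann-Stieltjes sum against the sum over the integers in $(0,4]$:

```python
import numpy as np

def rs_sum(f, g, a, b, n=400_000):
    # Riemann-Stieltjes sum  sum_k f(t_k) * (g(x_{k+1}) - g(x_k))
    # over a uniform partition of [a, b], with midpoint tags t_k.
    x = np.linspace(a, b, n + 1)
    t = (x[:-1] + x[1:]) / 2
    return float(np.sum(f(t) * np.diff(g(x))))

f = lambda x: x**2 + 1

print(rs_sum(f, np.floor, 0, 4))       # ~ 34.0 (tags land within 1/(2n) of each jump)
print(sum(f(i) for i in range(1, 5)))  # 34 = f(1) + f(2) + f(3) + f(4), the atomic sum over (0, 4]
```

Both computations agree that $\int_0^4 (x^2+1)\,d\lfloor x\rfloor = 2+5+10+17 = 34$; the tiny discrepancy in the first print comes from the midpoint tags sitting $O(1/n)$ away from the jump points $1,2,3,4$.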
Product Identity Multiple Angle or $\sin(nx)=2^{n-1}\prod_{k=0}^{n-1}\sin\left(\frac{k\pi}n+x\right)$
2,095,330
7
https://math.stackexchange.com/questions/2095330/product-identity-multiple-angle-or-sinnx-2n-1-prod-k-0n-1-sin-left
<p>I have run across this interesting identity that I am unable to verify.</p> <p>$$\sin(nx)=2^{n-1}\prod_{k=0}^{n-1}\sin\left(\frac{k\pi}n+x\right)$$</p> <p>Can anyone provide a hint as to how one would prove this? </p> <p>This identity was found on the Wolfram site <a href="http://mathworld.wolfram.com/Multiple-AngleFormulas.html" rel="noreferrer">1</a>.</p>
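Before hunting for a proof, the identity can be spot-checked numerically. The short sketch below is an editorial illustration (the range of $n$, the sample points, and the tolerance are arbitrary choices); it tests the claimed equality at random $x$:

```python
import numpy as np

# Spot-check  sin(n*x) = 2^(n-1) * prod_{k=0}^{n-1} sin(k*pi/n + x)
rng = np.random.default_rng(0)
for n in range(1, 9):
    for x in rng.uniform(-np.pi, np.pi, size=5):
        lhs = np.sin(n * x)
        rhs = 2.0 ** (n - 1) * np.prod(np.sin(np.arange(n) * np.pi / n + x))
        assert abs(lhs - rhs) < 1e-9, (n, x, lhs, rhs)
print("identity verified numerically for n = 1, ..., 8")
```

Such a check proves nothing, of course, but it rules out a typo in the formula before investing in a proof.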
[ { "answer_id": 2096321, "score": 12, "text": "<p>This solution herein is inspired by a comment left by @user1952009 on another posted solution. That solution is elegant in that it relies on only $(i)$ Euler's formula to write </p>\n\n<p>$$\\sin(nx)=\\frac{e^{inx}-e^{-inx}}{2i}\\tag 1$$ </p>\n\n<p>and $(ii)$ the $n$ roots of unity, $z=e^{-i2k \\pi/n}$ for $k=0,\\dots,n-1$, of the equation $z^n=1$ to write </p>\n\n<p>$$z^n-1=\\prod_{k=0}^{n-1}(z-e^{-i2\\pi k/n})\\tag 2$$</p>\n\n<p>Proceeding, we find </p>\n\n<p>$$\\begin{align}\n\\sin(nx)&amp;=\\frac{e^{inx}-e^{-inx}}{2i}&amp;\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\left(e^{i2nx}-1\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}(e^{i2x}-e^{-i2\\pi k/n})\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}\\left(\\color{blue}{e^{ix}}\\color{red}{e^{-i\\pi k/n}}\\color{green}{(2i)}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i} \\color{blue}{e^{inx}} \\color{red}{e^{-i(\\pi/n)n(n-1)/2}}\\color{green}{(2i)^n}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\\\\\\\\n&amp;=2^{n-1}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2095369, "score": 3, "text": "<p>You may prove such identity through Herglotz trick, for instance. Both the RHS and the LHS are entire functions in the complex plane of <a href=\"https://en.wikipedia.org/wiki/Entire_function\" rel=\"nofollow noreferrer\">order <span class=\"math-container\">$1$</span></a>, having simple zeroes at every point of <span class=\"math-container\">$\\frac{\\pi}{n}\\mathbb{Z}$</span> and only there. Additionally, they both are <span class=\"math-container\">$\\frac{2\\pi}{n}$</span>-periodic functions. It follows that such identity holds as soon as it holds for some <span class=\"math-container\">$x\\not\\in\\frac{\\pi}{n}\\mathbb{Z}$</span>, like <span class=\"math-container\">$x=\\frac{\\pi}{2n}$</span>. In such a case, it is straightforward to prove through De Moivre's identity <span class=\"math-container\">$\\sin(x)=\\frac{e^{ix}-e^{-ix}}{2}$</span>.</p>\n<p>Since <span class=\"math-container\">$\\sin(\\pi z)=\\frac{\\pi}{\\Gamma(z)\\Gamma(1-z)}$</span>, you may also prove it through the <a href=\"https://en.wikipedia.org/wiki/Multiplication_theorem\" rel=\"nofollow noreferrer\">multiplication theorem</a> for the <span class=\"math-container\">$\\Gamma$</span> function.</p>\n" }, { "answer_id": 2095366, "score": 2, "text": "<p>The right side is odd and periodic with period $2\\pi/n$. Repeated use of the identities for $\\sin(a) \\sin(b)$ and $\\cos(a) \\sin(b)$ must give you a sum of terms of the form $a_j \\sin(j x)$ and $b_j \\cos(j x)$ with $0 \\le j \\le n$, but the oddness and periodicity implies that the terms for $j &lt; n$ and the $\\cos$ terms cancel, leaving a multiple of $\\sin(nx)$. Now all you have to do is show that the constant is $1$...</p>\n" }, { "answer_id": 2096321, "score": 12, "text": "<p>This solution herein is inspired by a comment left by @user1952009 on another posted solution. 
That solution is elegant in that it relies on only $(i)$ Euler's formula to write </p>\n\n<p>$$\\sin(nx)=\\frac{e^{inx}-e^{-inx}}{2i}\\tag 1$$ </p>\n\n<p>and $(ii)$ the $n$ roots of unity, $z=e^{-i2k \\pi/n}$ for $k=0,\\dots,n-1$, of the equation $z^n=1$ to write </p>\n\n<p>$$z^n-1=\\prod_{k=0}^{n-1}(z-e^{-i2\\pi k/n})\\tag 2$$</p>\n\n<p>Proceeding, we find </p>\n\n<p>$$\\begin{align}\n\\sin(nx)&amp;=\\frac{e^{inx}-e^{-inx}}{2i}&amp;\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\left(e^{i2nx}-1\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}(e^{i2x}-e^{-i2\\pi k/n})\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}\\left(\\color{blue}{e^{ix}}\\color{red}{e^{-i\\pi k/n}}\\color{green}{(2i)}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i} \\color{blue}{e^{inx}} \\color{red}{e^{-i(\\pi/n)n(n-1)/2}}\\color{green}{(2i)^n}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\\\\\\\\n&amp;=2^{n-1}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2095369, "score": 3, "text": "<p>You may prove such identity through Herglotz trick, for instance. Both the RHS and the LHS are entire functions in the complex plane of <a href=\"https://en.wikipedia.org/wiki/Entire_function\" rel=\"nofollow noreferrer\">order <span class=\"math-container\">$1$</span></a>, having simple zeroes at every point of <span class=\"math-container\">$\\frac{\\pi}{n}\\mathbb{Z}$</span> and only there. Additionally, they both are <span class=\"math-container\">$\\frac{2\\pi}{n}$</span>-periodic functions. It follows that such identity holds as soon as it holds for some <span class=\"math-container\">$x\\not\\in\\frac{\\pi}{n}\\mathbb{Z}$</span>, like <span class=\"math-container\">$x=\\frac{\\pi}{2n}$</span>. In such a case, it is straightforward to prove through De Moivre's identity <span class=\"math-container\">$\\sin(x)=\\frac{e^{ix}-e^{-ix}}{2}$</span>.</p>\n<p>Since <span class=\"math-container\">$\\sin(\\pi z)=\\frac{\\pi}{\\Gamma(z)\\Gamma(1-z)}$</span>, you may also prove it through the <a href=\"https://en.wikipedia.org/wiki/Multiplication_theorem\" rel=\"nofollow noreferrer\">multiplication theorem</a> for the <span class=\"math-container\">$\\Gamma$</span> function.</p>\n" }, { "answer_id": 2095366, "score": 2, "text": "<p>The right side is odd and periodic with period $2\\pi/n$. Repeated use of the identities for $\\sin(a) \\sin(b)$ and $\\cos(a) \\sin(b)$ must give you a sum of terms of the form $a_j \\sin(j x)$ and $b_j \\cos(j x)$ with $0 \\le j \\le n$, but the oddness and periodicity implies that the terms for $j &lt; n$ and the $\\cos$ terms cancel, leaving a multiple of $\\sin(nx)$. Now all you have to do is show that the constant is $1$...</p>\n" }, { "answer_id": 2096321, "score": 12, "text": "<p>This solution herein is inspired by a comment left by @user1952009 on another posted solution. 
That solution is elegant in that it relies on only $(i)$ Euler's formula to write </p>\n\n<p>$$\\sin(nx)=\\frac{e^{inx}-e^{-inx}}{2i}\\tag 1$$ </p>\n\n<p>and $(ii)$ the $n$ roots of unity, $z=e^{-i2k \\pi/n}$ for $k=0,\\dots,n-1$, of the equation $z^n=1$ to write </p>\n\n<p>$$z^n-1=\\prod_{k=0}^{n-1}(z-e^{-i2\\pi k/n})\\tag 2$$</p>\n\n<p>Proceeding, we find </p>\n\n<p>$$\\begin{align}\n\\sin(nx)&amp;=\\frac{e^{inx}-e^{-inx}}{2i}&amp;\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\left(e^{i2nx}-1\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}(e^{i2x}-e^{-i2\\pi k/n})\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}\\left(\\color{blue}{e^{ix}}\\color{red}{e^{-i\\pi k/n}}\\color{green}{(2i)}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i} \\color{blue}{e^{inx}} \\color{red}{e^{-i(\\pi/n)n(n-1)/2}}\\color{green}{(2i)^n}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\\\\\\\\n&amp;=2^{n-1}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2095369, "score": 3, "text": "<p>You may prove such identity through Herglotz trick, for instance. Both the RHS and the LHS are entire functions in the complex plane of <a href=\"https://en.wikipedia.org/wiki/Entire_function\" rel=\"nofollow noreferrer\">order <span class=\"math-container\">$1$</span></a>, having simple zeroes at every point of <span class=\"math-container\">$\\frac{\\pi}{n}\\mathbb{Z}$</span> and only there. Additionally, they both are <span class=\"math-container\">$\\frac{2\\pi}{n}$</span>-periodic functions. It follows that such identity holds as soon as it holds for some <span class=\"math-container\">$x\\not\\in\\frac{\\pi}{n}\\mathbb{Z}$</span>, like <span class=\"math-container\">$x=\\frac{\\pi}{2n}$</span>. In such a case, it is straightforward to prove through De Moivre's identity <span class=\"math-container\">$\\sin(x)=\\frac{e^{ix}-e^{-ix}}{2}$</span>.</p>\n<p>Since <span class=\"math-container\">$\\sin(\\pi z)=\\frac{\\pi}{\\Gamma(z)\\Gamma(1-z)}$</span>, you may also prove it through the <a href=\"https://en.wikipedia.org/wiki/Multiplication_theorem\" rel=\"nofollow noreferrer\">multiplication theorem</a> for the <span class=\"math-container\">$\\Gamma$</span> function.</p>\n" }, { "answer_id": 2095366, "score": 2, "text": "<p>The right side is odd and periodic with period $2\\pi/n$. Repeated use of the identities for $\\sin(a) \\sin(b)$ and $\\cos(a) \\sin(b)$ must give you a sum of terms of the form $a_j \\sin(j x)$ and $b_j \\cos(j x)$ with $0 \\le j \\le n$, but the oddness and periodicity implies that the terms for $j &lt; n$ and the $\\cos$ terms cancel, leaving a multiple of $\\sin(nx)$. Now all you have to do is show that the constant is $1$...</p>\n" }, { "answer_id": 2096321, "score": 12, "text": "<p>This solution herein is inspired by a comment left by @user1952009 on another posted solution. 
That solution is elegant in that it relies on only $(i)$ Euler's formula to write </p>\n\n<p>$$\\sin(nx)=\\frac{e^{inx}-e^{-inx}}{2i}\\tag 1$$ </p>\n\n<p>and $(ii)$ the $n$ roots of unity, $z=e^{-i2k \\pi/n}$ for $k=0,\\dots,n-1$, of the equation $z^n=1$ to write </p>\n\n<p>$$z^n-1=\\prod_{k=0}^{n-1}(z-e^{-i2\\pi k/n})\\tag 2$$</p>\n\n<p>Proceeding, we find </p>\n\n<p>$$\\begin{align}\n\\sin(nx)&amp;=\\frac{e^{inx}-e^{-inx}}{2i}&amp;\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\left(e^{i2nx}-1\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}(e^{i2x}-e^{-i2\\pi k/n})\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}\\left(\\color{blue}{e^{ix}}\\color{red}{e^{-i\\pi k/n}}\\color{green}{(2i)}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i} \\color{blue}{e^{inx}} \\color{red}{e^{-i(\\pi/n)n(n-1)/2}}\\color{green}{(2i)^n}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\\\\\\\\n&amp;=2^{n-1}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2095369, "score": 3, "text": "<p>You may prove such identity through Herglotz trick, for instance. Both the RHS and the LHS are entire functions in the complex plane of <a href=\"https://en.wikipedia.org/wiki/Entire_function\" rel=\"nofollow noreferrer\">order <span class=\"math-container\">$1$</span></a>, having simple zeroes at every point of <span class=\"math-container\">$\\frac{\\pi}{n}\\mathbb{Z}$</span> and only there. Additionally, they both are <span class=\"math-container\">$\\frac{2\\pi}{n}$</span>-periodic functions. It follows that such identity holds as soon as it holds for some <span class=\"math-container\">$x\\not\\in\\frac{\\pi}{n}\\mathbb{Z}$</span>, like <span class=\"math-container\">$x=\\frac{\\pi}{2n}$</span>. In such a case, it is straightforward to prove through De Moivre's identity <span class=\"math-container\">$\\sin(x)=\\frac{e^{ix}-e^{-ix}}{2}$</span>.</p>\n<p>Since <span class=\"math-container\">$\\sin(\\pi z)=\\frac{\\pi}{\\Gamma(z)\\Gamma(1-z)}$</span>, you may also prove it through the <a href=\"https://en.wikipedia.org/wiki/Multiplication_theorem\" rel=\"nofollow noreferrer\">multiplication theorem</a> for the <span class=\"math-container\">$\\Gamma$</span> function.</p>\n" }, { "answer_id": 2095366, "score": 2, "text": "<p>The right side is odd and periodic with period $2\\pi/n$. Repeated use of the identities for $\\sin(a) \\sin(b)$ and $\\cos(a) \\sin(b)$ must give you a sum of terms of the form $a_j \\sin(j x)$ and $b_j \\cos(j x)$ with $0 \\le j \\le n$, but the oddness and periodicity implies that the terms for $j &lt; n$ and the $\\cos$ terms cancel, leaving a multiple of $\\sin(nx)$. Now all you have to do is show that the constant is $1$...</p>\n" }, { "answer_id": 2096321, "score": 12, "text": "<p>This solution herein is inspired by a comment left by @user1952009 on another posted solution. 
That solution is elegant in that it relies on only $(i)$ Euler's formula to write </p>\n\n<p>$$\\sin(nx)=\\frac{e^{inx}-e^{-inx}}{2i}\\tag 1$$ </p>\n\n<p>and $(ii)$ the $n$ roots of unity, $z=e^{-i2k \\pi/n}$ for $k=0,\\dots,n-1$, of the equation $z^n=1$ to write </p>\n\n<p>$$z^n-1=\\prod_{k=0}^{n-1}(z-e^{-i2\\pi k/n})\\tag 2$$</p>\n\n<p>Proceeding, we find </p>\n\n<p>$$\\begin{align}\n\\sin(nx)&amp;=\\frac{e^{inx}-e^{-inx}}{2i}&amp;\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\left(e^{i2nx}-1\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}(e^{i2x}-e^{-i2\\pi k/n})\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}\\left(\\color{blue}{e^{ix}}\\color{red}{e^{-i\\pi k/n}}\\color{green}{(2i)}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i} \\color{blue}{e^{inx}} \\color{red}{e^{-i(\\pi/n)n(n-1)/2}}\\color{green}{(2i)^n}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\\\\\\\\n&amp;=2^{n-1}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2095369, "score": 3, "text": "<p>You may prove such identity through Herglotz trick, for instance. Both the RHS and the LHS are entire functions in the complex plane of <a href=\"https://en.wikipedia.org/wiki/Entire_function\" rel=\"nofollow noreferrer\">order <span class=\"math-container\">$1$</span></a>, having simple zeroes at every point of <span class=\"math-container\">$\\frac{\\pi}{n}\\mathbb{Z}$</span> and only there. Additionally, they both are <span class=\"math-container\">$\\frac{2\\pi}{n}$</span>-periodic functions. It follows that such identity holds as soon as it holds for some <span class=\"math-container\">$x\\not\\in\\frac{\\pi}{n}\\mathbb{Z}$</span>, like <span class=\"math-container\">$x=\\frac{\\pi}{2n}$</span>. In such a case, it is straightforward to prove through De Moivre's identity <span class=\"math-container\">$\\sin(x)=\\frac{e^{ix}-e^{-ix}}{2}$</span>.</p>\n<p>Since <span class=\"math-container\">$\\sin(\\pi z)=\\frac{\\pi}{\\Gamma(z)\\Gamma(1-z)}$</span>, you may also prove it through the <a href=\"https://en.wikipedia.org/wiki/Multiplication_theorem\" rel=\"nofollow noreferrer\">multiplication theorem</a> for the <span class=\"math-container\">$\\Gamma$</span> function.</p>\n" }, { "answer_id": 2095366, "score": 2, "text": "<p>The right side is odd and periodic with period $2\\pi/n$. Repeated use of the identities for $\\sin(a) \\sin(b)$ and $\\cos(a) \\sin(b)$ must give you a sum of terms of the form $a_j \\sin(j x)$ and $b_j \\cos(j x)$ with $0 \\le j \\le n$, but the oddness and periodicity implies that the terms for $j &lt; n$ and the $\\cos$ terms cancel, leaving a multiple of $\\sin(nx)$. Now all you have to do is show that the constant is $1$...</p>\n" }, { "answer_id": 2096321, "score": 12, "text": "<p>This solution herein is inspired by a comment left by @user1952009 on another posted solution. 
That solution is elegant in that it relies on only $(i)$ Euler's formula to write </p>\n\n<p>$$\\sin(nx)=\\frac{e^{inx}-e^{-inx}}{2i}\\tag 1$$ </p>\n\n<p>and $(ii)$ the $n$ roots of unity, $z=e^{-i2k \\pi/n}$ for $k=0,\\dots,n-1$, of the equation $z^n=1$ to write </p>\n\n<p>$$z^n-1=\\prod_{k=0}^{n-1}(z-e^{-i2\\pi k/n})\\tag 2$$</p>\n\n<p>Proceeding, we find </p>\n\n<p>$$\\begin{align}\n\\sin(nx)&amp;=\\frac{e^{inx}-e^{-inx}}{2i}&amp;\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\left(e^{i2nx}-1\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}(e^{i2x}-e^{-i2\\pi k/n})\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}\\left(\\color{blue}{e^{ix}}\\color{red}{e^{-i\\pi k/n}}\\color{green}{(2i)}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i} \\color{blue}{e^{inx}} \\color{red}{e^{-i(\\pi/n)n(n-1)/2}}\\color{green}{(2i)^n}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\\\\\\\\n&amp;=2^{n-1}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2095369, "score": 3, "text": "<p>You may prove such identity through Herglotz trick, for instance. Both the RHS and the LHS are entire functions in the complex plane of <a href=\"https://en.wikipedia.org/wiki/Entire_function\" rel=\"nofollow noreferrer\">order <span class=\"math-container\">$1$</span></a>, having simple zeroes at every point of <span class=\"math-container\">$\\frac{\\pi}{n}\\mathbb{Z}$</span> and only there. Additionally, they both are <span class=\"math-container\">$\\frac{2\\pi}{n}$</span>-periodic functions. It follows that such identity holds as soon as it holds for some <span class=\"math-container\">$x\\not\\in\\frac{\\pi}{n}\\mathbb{Z}$</span>, like <span class=\"math-container\">$x=\\frac{\\pi}{2n}$</span>. In such a case, it is straightforward to prove through De Moivre's identity <span class=\"math-container\">$\\sin(x)=\\frac{e^{ix}-e^{-ix}}{2}$</span>.</p>\n<p>Since <span class=\"math-container\">$\\sin(\\pi z)=\\frac{\\pi}{\\Gamma(z)\\Gamma(1-z)}$</span>, you may also prove it through the <a href=\"https://en.wikipedia.org/wiki/Multiplication_theorem\" rel=\"nofollow noreferrer\">multiplication theorem</a> for the <span class=\"math-container\">$\\Gamma$</span> function.</p>\n" }, { "answer_id": 2095366, "score": 2, "text": "<p>The right side is odd and periodic with period $2\\pi/n$. Repeated use of the identities for $\\sin(a) \\sin(b)$ and $\\cos(a) \\sin(b)$ must give you a sum of terms of the form $a_j \\sin(j x)$ and $b_j \\cos(j x)$ with $0 \\le j \\le n$, but the oddness and periodicity implies that the terms for $j &lt; n$ and the $\\cos$ terms cancel, leaving a multiple of $\\sin(nx)$. Now all you have to do is show that the constant is $1$...</p>\n" }, { "answer_id": 2096321, "score": 12, "text": "<p>This solution herein is inspired by a comment left by @user1952009 on another posted solution. 
That solution is elegant in that it relies on only $(i)$ Euler's formula to write </p>\n\n<p>$$\\sin(nx)=\\frac{e^{inx}-e^{-inx}}{2i}\\tag 1$$ </p>\n\n<p>and $(ii)$ the $n$ roots of unity, $z=e^{-i2k \\pi/n}$ for $k=0,\\dots,n-1$, of the equation $z^n=1$ to write </p>\n\n<p>$$z^n-1=\\prod_{k=0}^{n-1}(z-e^{-i2\\pi k/n})\\tag 2$$</p>\n\n<p>Proceeding, we find </p>\n\n<p>$$\\begin{align}\n\\sin(nx)&amp;=\\frac{e^{inx}-e^{-inx}}{2i}&amp;\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\left(e^{i2nx}-1\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}(e^{i2x}-e^{-i2\\pi k/n})\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i}\\prod_{k=0}^{n-1}\\left(\\color{blue}{e^{ix}}\\color{red}{e^{-i\\pi k/n}}\\color{green}{(2i)}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\right)\\\\\\\\\n&amp;=\\frac{e^{-inx}}{2i} \\color{blue}{e^{inx}} \\color{red}{e^{-i(\\pi/n)n(n-1)/2}}\\color{green}{(2i)^n}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\\\\\\\\\n&amp;=2^{n-1}\\prod_{k=0}^{n-1}\\sin\\left(x+\\frac{k \\pi}{n}\\right)\n\\end{align}$$</p>\n\n<p>as was to be shown!</p>\n" }, { "answer_id": 2095369, "score": 3, "text": "<p>You may prove such identity through Herglotz trick, for instance. Both the RHS and the LHS are entire functions in the complex plane of <a href=\"https://en.wikipedia.org/wiki/Entire_function\" rel=\"nofollow noreferrer\">order <span class=\"math-container\">$1$</span></a>, having simple zeroes at every point of <span class=\"math-container\">$\\frac{\\pi}{n}\\mathbb{Z}$</span> and only there. Additionally, they both are <span class=\"math-container\">$\\frac{2\\pi}{n}$</span>-periodic functions. It follows that such identity holds as soon as it holds for some <span class=\"math-container\">$x\\not\\in\\frac{\\pi}{n}\\mathbb{Z}$</span>, like <span class=\"math-container\">$x=\\frac{\\pi}{2n}$</span>. In such a case, it is straightforward to prove through De Moivre's identity <span class=\"math-container\">$\\sin(x)=\\frac{e^{ix}-e^{-ix}}{2}$</span>.</p>\n<p>Since <span class=\"math-container\">$\\sin(\\pi z)=\\frac{\\pi}{\\Gamma(z)\\Gamma(1-z)}$</span>, you may also prove it through the <a href=\"https://en.wikipedia.org/wiki/Multiplication_theorem\" rel=\"nofollow noreferrer\">multiplication theorem</a> for the <span class=\"math-container\">$\\Gamma$</span> function.</p>\n" }, { "answer_id": 2095366, "score": 2, "text": "<p>The right side is odd and periodic with period $2\\pi/n$. Repeated use of the identities for $\\sin(a) \\sin(b)$ and $\\cos(a) \\sin(b)$ must give you a sum of terms of the form $a_j \\sin(j x)$ and $b_j \\cos(j x)$ with $0 \\le j \\le n$, but the oddness and periodicity implies that the terms for $j &lt; n$ and the $\\cos$ terms cancel, leaving a multiple of $\\sin(nx)$. Now all you have to do is show that the constant is $1$...</p>\n" }, { "answer_id": 2096321, "score": 12, "text": "<p>This solution herein is inspired by a comment left by @user1952009 on another posted solution. 
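An aside on the final step of the first answer above (an illustrative check, not part of the quoted text): the constant prefactor collapses because $\sum_{k=0}^{n-1}k=\frac{n(n-1)}{2}$ and $i\,e^{-i\pi/2}=1$, so

$$\frac{e^{-inx}}{2i}\,e^{inx}\,e^{-i\pi(n-1)/2}\,(2i)^n=2^{n-1}\,i^{\,n-1}\,e^{-i\pi(n-1)/2}=2^{n-1}\left(i\,e^{-i\pi/2}\right)^{n-1}=2^{n-1}.$$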
Question: Product Identity Multiple Angle or $\sin(nx)=2^{n-1}\prod_{k=0}^{n-1}\sin\left(\frac{k\pi}n+x\right)$ Answer 1: <p>This solution herein is inspired by a comment left by @user1952009 on another posted solution. That solution is elegant in that it relies on only $(i)$ Euler's formula to write </p> <p>$$\sin(nx)=\frac{e^{inx}-e^{-inx}}{2i}\tag 1$$ </p> <p>and $(ii)$ the $n$ roots of unity, $z=e^{-i2k \pi/n}$ for $k=0,\dots,n-1$, of the equation $z^n=1$ to write </p> <p>$$z^n-1=\prod_{k=0}^{n-1}(z-e^{-i2\pi k/n})\tag 2$$</p> <p>Proceeding, we find </p> <p>$$\begin{align} \sin(nx)&amp;=\frac{e^{inx}-e^{-inx}}{2i}&amp;\\\\ &amp;=\frac{e^{-inx}}{2i}\left(e^{i2nx}-1\right)\\\\ &amp;=\frac{e^{-inx}}{2i}\prod_{k=0}^{n-1}(e^{i2x}-e^{-i2\pi k/n})\\\\ &amp;=\frac{e^{-inx}}{2i}\prod_{k=0}^{n-1}\left(\color{blue}{e^{ix}}\color{red}{e^{-i\pi k/n}}\color{green}{(2i)}\sin\left(x+\frac{k \pi}{n}\right)\right)\\\\ &amp;=\frac{e^{-inx}}{2i} \color{blue}{e^{inx}} \color{red}{e^{-i(\pi/n)n(n-1)/2}}\color{green}{(2i)^n}\prod_{k=0}^{n-1}\sin\left(x+\frac{k \pi}{n}\right)\\\\ &amp;=2^{n-1}\prod_{k=0}^{n-1}\sin\left(x+\frac{k \pi}{n}\right) \end{align}$$</p> <p>as was to be shown!</p> Answer 2: <p>You may prove such an identity through the Herglotz trick, for instance. Both the RHS and the LHS are entire functions in the complex plane of <a href="https://en.wikipedia.org/wiki/Entire_function" rel="nofollow noreferrer">order <span class="math-container">$1$</span></a>, having simple zeroes at every point of <span class="math-container">$\frac{\pi}{n}\mathbb{Z}$</span> and only there. Additionally, they both are <span class="math-container">$\frac{2\pi}{n}$</span>-periodic functions. It follows that such an identity holds as soon as it holds for some <span class="math-container">$x\not\in\frac{\pi}{n}\mathbb{Z}$</span>, like <span class="math-container">$x=\frac{\pi}{2n}$</span>. In such a case, it is straightforward to prove through Euler's formula <span class="math-container">$\sin(x)=\frac{e^{ix}-e^{-ix}}{2i}$</span>.</p> <p>Since <span class="math-container">$\sin(\pi z)=\frac{\pi}{\Gamma(z)\Gamma(1-z)}$</span>, you may also prove it through the <a href="https://en.wikipedia.org/wiki/Multiplication_theorem" rel="nofollow noreferrer">multiplication theorem</a> for the <span class="math-container">$\Gamma$</span> function.</p> Answer 3: <p>The right side is odd and periodic with period $2\pi/n$. Repeated use of the identities for $\sin(a) \sin(b)$ and $\cos(a) \sin(b)$ must give you a sum of terms of the form $a_j \sin(j x)$ and $b_j \cos(j x)$ with $0 \le j \le n$, but the oddness and periodicity imply that the terms for $j &lt; n$ and the $\cos$ terms cancel, leaving a multiple of $\sin(nx)$. Now all you have to do is show that the constant is $1$...</p>
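A quick numerical sanity check of the identity above — a minimal sketch assuming Python 3.8+ (for `math.prod`), not part of the original record:

```python
# Numerically verify sin(n*x) = 2**(n-1) * prod_{k=0}^{n-1} sin(x + k*pi/n)
# at a few sample points (illustrative check only).
import math

for n in (2, 3, 5, 8):
    for x in (0.3, 1.1, 2.7):
        lhs = math.sin(n * x)
        rhs = 2 ** (n - 1) * math.prod(math.sin(x + k * math.pi / n) for k in range(n))
        assert math.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-12), (n, x, lhs, rhs)
print("identity holds at all sampled points")
```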
Analytic Functions - why 2 dimensions?
2,095,894
7
https://math.stackexchange.com/questions/2095894/analytic-functions-why-2-dimensions
<p>Having just learned the definition of analytic functions, I found it surprising and somewhat counterintuitive that the set of functions over the complex plane whose real and imaginary parts each satisfy Laplace's Equation $\nabla^{2} = 0$ should have such incredible significance, since this is basically a pair of harmonic functions over $\mathbb{R}^2$. Does this extend to higher-dimensional hypercomplex numbers such as the quaternions?</p> <p>More formally, is there some generalization of the space of analytic functions to higher dimensions $2^n$, $n \in \mathbb{N}$, $n &gt; 1$, where each component function individually satisfies Laplace's Equation? Do these function spaces have properties similar to those enumerated in the field of complex analysis, or are there salient differences?</p>
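As a small illustration of the premise in the question above (an editorial sketch, assuming SymPy is available; the choice $f(z)=z^3$ is just an example), the real and imaginary parts of an analytic function are indeed harmonic:

```python
# Symbolic check that Re(f) and Im(f) of the analytic f(z) = z**3
# each satisfy Laplace's equation (illustrative example only).
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.expand((x + sp.I * y) ** 3)
u, v = sp.re(f), sp.im(f)  # u = x**3 - 3*x*y**2, v = 3*x**2*y - y**3

for part in (u, v):
    laplacian = sp.diff(part, x, 2) + sp.diff(part, y, 2)
    assert sp.simplify(laplacian) == 0  # both parts are harmonic
```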
[ { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. 
Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. 
Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. 
Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. 
Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. 
Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. 
Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" }, { "answer_id": 2097745, "score": 4, "text": "<p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as for complex analysis as you can see in this presentation of <a href=\"http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf\" rel=\"nofollow noreferrer\">quaternionic analysis</a> whre it's presented the definition of a ''regular'' quaternionic function proposed by R.\nFueter ( in 1935), characterized by a property analogue to the Cauchy-Riemann equations. Obviously many difficults in the extension of calculus to quaternions come from non commutativity, that is a characteristic of any extension of the field of complex numbers.</p>\n" }, { "answer_id": 2096269, "score": 3, "text": "<p>In the book <a href=\"http://press.princeton.edu/titles/1978.html\" rel=\"nofollow noreferrer\">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. 
Weiss, a \"generalized analytic function\" is defined as a function $u=(u_1,\\dots,u_n)\\colon D\\subset\\mathbb{R}^n\\to\\mathbb{R}^n$ such that\n$$\n\\sum_{i=1}^n\\frac{\\partial u_i}{\\partial x_i}=0,\\quad \\frac{\\partial u_i}{\\partial x_j}=\\frac{\\partial u_j}{\\partial x_i},\\quad1\\le i,j\\le n.\n$$\nThis is equivalent to\n$$\n\\operatorname{div}u=0,\\quad \\operatorname{curl}u=0.\n$$</p>\n" }, { "answer_id": 2097781, "score": 3, "text": "<p>It is also amazing that domains in $\\mathbb R^2$ have so many conformal maps, but domains in $\\mathbb R^n$ for $n \\ge 3$ have (by comparison) hardly any.</p>\n" } ]
Question: Analytic Functions - why 2 dimensions? Answer 1: <p>In quaternion analysis the notions of analyticity, holomorphy, and harmonicity are not linked as they are in complex analysis, as you can see in this presentation of <a href="http://www.theworld.com/~sweetser/quaternions/ps/Quaternionic-analysis.pdf" rel="nofollow noreferrer">quaternionic analysis</a>, where the definition of a ''regular'' quaternionic function proposed by R. Fueter (in 1935) is presented, characterized by a property analogous to the Cauchy-Riemann equations. Obviously many difficulties in the extension of calculus to quaternions come from non-commutativity, which is a characteristic of any extension of the field of complex numbers.</p> Answer 2: <p>In the book <a href="http://press.princeton.edu/titles/1978.html" rel="nofollow noreferrer">Introduction to Fourier Analysis on Euclidean Spaces</a>, by E.M. Stein and G. Weiss, a "generalized analytic function" is defined as a function $u=(u_1,\dots,u_n)\colon D\subset\mathbb{R}^n\to\mathbb{R}^n$ such that $$ \sum_{i=1}^n\frac{\partial u_i}{\partial x_i}=0,\quad \frac{\partial u_i}{\partial x_j}=\frac{\partial u_j}{\partial x_i},\quad1\le i,j\le n. $$ This is equivalent to $$ \operatorname{div}u=0,\quad \operatorname{curl}u=0. $$</p> Answer 3: <p>It is also amazing that domains in $\mathbb R^2$ have so many conformal maps, but domains in $\mathbb R^n$ for $n \ge 3$ have (by comparison) hardly any.</p>
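The Stein-Weiss system quoted above reduces, for $n=2$, to the Cauchy-Riemann equations for $f=u_1-iu_2$, which is exactly the two-dimensional case the question starts from. The following sketch (an editorial addition; it assumes SymPy is available and uses the made-up example $f(z)=z^2$) verifies $\operatorname{div}u=0$ and $\operatorname{curl}u=0$ symbolically.

# For n = 2, div u = 0 and curl u = 0 are the Cauchy-Riemann equations for f = u1 - i*u2.
# Illustrative editorial addition, assuming SymPy is installed; f(z) = z**2 is a made-up example.
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.expand((x + sp.I * y) ** 2)       # a holomorphic function of z = x + i*y
u1 = sp.re(f)                            # u1 = Re f
u2 = -sp.im(f)                           # u2 = -Im f, so that f = u1 - i*u2

div_u = sp.diff(u1, x) + sp.diff(u2, y)  # sum_i  du_i/dx_i
curl_u = sp.diff(u1, y) - sp.diff(u2, x) # du_1/dx_2 - du_2/dx_1

assert sp.simplify(div_u) == 0 and sp.simplify(curl_u) == 0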
What is the cartesian product of cartesian products?
2,096,073
7
https://math.stackexchange.com/questions/2096073/what-is-the-cartesian-product-of-cartesian-products
<p>If I have four sets <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span> and <span class="math-container">$D$</span>, what is the meaning of <span class="math-container">$(A \times B) \times (C \times D)$</span>? <br>Can I define each of its elements to be a matrix, such that the first index of each entry of the matrix is given by its position in the outer Cartesian product, i.e. <span class="math-container">$(A \times B) \times (C \times D)$</span>, and the second index is given by its position within the inner Cartesian product, i.e. <span class="math-container">$(A \times B)$</span> or <span class="math-container">$(C \times D)$</span>?</p>
[ { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. 
There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... 
ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. 
You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... 
ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. 
You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... 
ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. 
You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... 
ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" }, { "answer_id": 2096086, "score": 16, "text": "<p>By definition $(A\\times B)\\times(C\\times D)$ is the set of all ordered pairs of ordered pairs of the form</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. 
You’ve described the set of $2\\times 2$ matrices of the form</p>\n\n<p>$$\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>where $a\\in A,b\\in B,c\\in C$, and $d\\in D$. There is a natural bijection between these two sets given by</p>\n\n<p>$$\\big\\langle\\langle a,b\\rangle,\\langle c,d\\rangle\\big\\rangle\\mapsto\\begin{bmatrix}a&amp;b\\\\c&amp;d\\end{bmatrix}\\;,$$</p>\n\n<p>but the two sets are not equal, simply because a $2\\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p>\n" }, { "answer_id": 2096082, "score": 5, "text": "<p>Cartesian product of cartesian products is a set of tuples of tuples.\n$$(A \\times B) \\times (C \\times D) = \\left\\{((a,b),(c,d))|a\\in A, b\\in B, c\\in C, d\\in D\\right\\}$$</p>\n" }, { "answer_id": 2096083, "score": 5, "text": "<p>$(A \\times B)\\times (C \\times D))=\\{(u,v): u \\in A \\times B, v \\in C \\times D\\}=\\{((a,b),(c,d)):a \\in A, b \\in B, c \\in C, d \\in D\\}$.</p>\n" }, { "answer_id": 2096085, "score": 4, "text": "<p>The meaning of $(A\\times B)\\times(C\\times D)$ is that of a set of ordered pairs of ... ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p>\n\n<p>An example would be this: if $A=B=C=D=\\mathbb{R}$, then $(A\\times B)\\times (C\\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p>\n\n<p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $((A\\times B)\\times(C \\times D))$ with $A \\times B \\times C \\times D$; the former is a set of pairs of pairs, the second is a set of $4$-uples.</p>\n" } ]
Question: What is the cartesian product of cartesian products? Answer 1: <p>By definition $(A\times B)\times(C\times D)$ is the set of all ordered pairs of ordered pairs of the form</p> <p>$$\big\langle\langle a,b\rangle,\langle c,d\rangle\big\rangle\;,$$</p> <p>where $a\in A,b\in B,c\in C$, and $d\in D$. You’ve described the set of $2\times 2$ matrices of the form</p> <p>$$\begin{bmatrix}a&amp;b\\c&amp;d\end{bmatrix}\;,$$</p> <p>where $a\in A,b\in B,c\in C$, and $d\in D$. There is a natural bijection between these two sets given by</p> <p>$$\big\langle\langle a,b\rangle,\langle c,d\rangle\big\rangle\mapsto\begin{bmatrix}a&amp;b\\c&amp;d\end{bmatrix}\;,$$</p> <p>but the two sets are not equal, simply because a $2\times 2$ matrix is not an ordered pair of ordered pairs: the set of matrices is not the same thing as the Cartesian product.</p> Answer 2: <p>Cartesian product of cartesian products is a set of tuples of tuples. $$(A \times B) \times (C \times D) = \left\{((a,b),(c,d))|a\in A, b\in B, c\in C, d\in D\right\}$$</p> Answer 3: <p>$(A \times B)\times (C \times D)=\{(u,v): u \in A \times B, v \in C \times D\}=\{((a,b),(c,d)):a \in A, b \in B, c \in C, d \in D\}$.</p> Answer 4: <p>The meaning of $(A\times B)\times(C\times D)$ is that of a set of ordered pairs of ... ordered pairs, the first pair with the first element from $A$ and the second from $B$, and the second pair with the first element from $C$ and the second from $D$.</p> <p>An example would be this: if $A=B=C=D=\mathbb{R}$, then $(A\times B)\times (C\times D)$ can represent the set of all segments on the cartesian plane, each segment being represented as a pair of points, with each point in turn represented as a pair of coordinates: $(x_1,y_1),(x_2,y_2)$.</p> <p>You can certainly represent it as a matrix as long as all four sets are at most countably infinite - so not in the segments example above! What's really important is not to confuse $(A\times B)\times(C \times D)$ with $A \times B \times C \times D$; the former is a set of pairs of pairs, the latter is a set of $4$-tuples.</p>
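The distinction these answers draw (pairs of pairs, flat $4$-tuples, and $2\times 2$ matrices are pairwise different objects, even though natural bijections connect them) is easy to check concretely. Below is a minimal sketch, my own illustration rather than part of the thread; the sets and names are arbitrary:

```python
from itertools import product

A, B, C, D = {1, 2}, {3}, {4, 5}, {6}  # any finite sets would do

# (A x B) x (C x D): ordered pairs of ordered pairs <<a,b>, <c,d>>
pairs_of_pairs = set(product(product(A, B), product(C, D)))

# A x B x C x D: flat 4-tuples (a, b, c, d) -- a different set of the same size
flat_tuples = set(product(A, B, C, D))

def to_matrix(pp):
    """The natural bijection <<a,b>, <c,d>> -> [[a, b], [c, d]]."""
    (a, b), (c, d) = pp
    return [[a, b], [c, d]]

assert len(pairs_of_pairs) == len(flat_tuples) == len(A) * len(B) * len(C) * len(D)
assert pairs_of_pairs.isdisjoint(flat_tuples)  # equinumerous, yet no element in common
print(to_matrix(((1, 3), (4, 6))))             # [[1, 3], [4, 6]]
```

The two asserts make the point of Answer 1 and Answer 4 concrete: the sets have the same cardinality and a natural bijection between them, but no elements in common, so they are not equal.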
Probability of picking a uniquely colored ball successively without replacement
2,096,592
7
https://math.stackexchange.com/questions/2096592/probability-of-picking-a-uniquely-colored-ball-successively-without-replacement
<p>In a bucket, there are five different colors of balls, two of each color, making 10 in total. If you pick three balls at random without replacement, what is the probability that you pick a differently colored ball each time? What is the probability that you pick balls of only two different colors?</p> <p>I just made up a simple random scenario that demonstrates the type of problem I am trying to figure out. With the skills I acquired in pre-calculus, I would be able to solve this if it were with replacement, but not without replacement. Also, having several distinct colors but identical balls within each color group adds an extra layer of complexity that confuses me. What is the process to solve this problem? And if I were to increase the number of colors, the number of balls per color, or the number of picks, could the same formula/process still be applied?</p>
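A quick numerical sanity check is possible before reading the counting arguments in the answers below. The following is a minimal sketch, my own illustration rather than part of the thread (function and variable names are hypothetical); it brute-forces the stated 5-color, 2-per-color, 3-pick case and also evaluates the generalized closed form that the final answer derives:

```python
from itertools import combinations
from math import comb

def p_all_distinct(x, y, z):
    """Probability that z picks without replacement from x colors,
    y identical balls per color, all have different colors."""
    if z > x:
        return 0.0
    return comb(x, z) * y**z / comb(x * y, z)

# Brute force for the question's case: 5 colors, 2 balls each, 3 picks.
balls = [(color, copy) for color in range(5) for copy in range(2)]
draws = list(combinations(balls, 3))             # all C(10,3) = 120 unordered draws
n_distinct = sum(len({c for c, _ in d}) == 3 for d in draws)

print(p_all_distinct(5, 2, 3))       # 0.666... = 2/3, three different colors
print(n_distinct / len(draws))       # brute force agrees with the closed form
print(1 - n_distinct / len(draws))   # 1/3: exactly two colors among the three picks
```

(With only two balls per color, three picks can never all share a color, so "not all distinct" and "exactly two colors" are the same event here.)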
[ { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" }, { "answer_id": 2096606, "score": 6, "text": "<p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p>\n\n<p>There are $_{10}C_3$ ways to pick three balls out.</p>\n\n<p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\\cdot (_5C_3) / (_{10}C_3)$.</p>\n\n<p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \\cdot 8 / (_{10}C_3).$</p>\n\n<p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p>\n" }, { "answer_id": 2096607, "score": 5, "text": "<p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p>\n\n<p>We have $5$ colors : $C_1, C_2,..,C_5$</p>\n\n<p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\\binom{10}{3}$ possibles scenarios.</p>\n\n<p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a \"good\" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p>\n\n<p>The total is then $\\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\\frac{\\binom{5}{3}*2^3}{\\binom{10}{3}}$$</p>\n" }, { "answer_id": 2096643, "score": 3, "text": "<p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\\cdot\\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\\frac{8}{9}\\cdot\\frac{6}{8}=\\frac{6}{9}=\\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\\frac{2}{3}=\\frac{1}{3}$. We can also compute it directly: $\\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\\frac{8}{9}\\cdot\\frac{2}{8}=\\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\\frac{1}{9}+\\frac{2}{9}=\\frac{3}{9}=\\frac{1}{3}$.</p>\n\n<p>I think this reasoning is much simpler than other answers.</p>\n" }, { "answer_id": 2096645, "score": 3, "text": "<p>Canardi's formula can be easily generalized. Assume:</p>\n\n<p>$x$: number of colors</p>\n\n<p>$y$: number of balls per color</p>\n\n<p>$z$: number of picks</p>\n\n<p>$P$: chance of all $z$ picks having a different color </p>\n\n<p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. if $z \\leq x$):</p>\n\n<p>$$P = \\frac{\\binom{x}{z}*y^z}{\\binom{x*y}{z}}$$</p>\n" } ]
Question: Probability of picking a uniquely colored ball successively without replacement
Answer 1: <p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p> <p>There are $_{10}C_3$ ways to pick three balls out of the ten.</p> <p>To pick three different colors, choose the colors ($_5C_3$ ways) and then choose which ball of each color ($2^3 = 8$ ways). This gives probability $8\cdot (_5C_3) / (_{10}C_3)$.</p> <p>To pick only two different colors, pick the color that contributes both of its balls ($5$ ways) and then pick the remaining ball ($8$ ways). So, probability $5 \cdot 8 / (_{10}C_3)$.</p> <p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow different numbers of balls of particular colors, then you'll need to treat those cases differently.</p>
Answer 2: <p>I do not think there is a cookbook formula for everything: it is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p> <p>We have $5$ colors: $C_1, C_2, \dots, C_5$.</p> <p>We pick $3$ balls; how many ways are there to pick $3$ balls out of $10$? There are $\binom{10}{3}$ possible scenarios.</p> <p>Now we count the scenarios in our favor, that is, $3$ different colors. First, we must choose $3$ colors in order to create a "good" scenario, for example: one red, one blue, one yellow. Here we have $5$ colors, and there are $\binom{5}{3}$ ways to choose three of them. For each chosen color, we have two choices of ball, that is, $2^3$ ways to pick $3$ balls among the three chosen colors.</p> <p>The total is then $\binom{5}{3}*2^3$, and the probability is the ratio between the number of good scenarios and the number of possible scenarios: $$\frac{\binom{5}{3}*2^3}{\binom{10}{3}}$$</p>
Answer 3: <p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. Every ball is good at the first pick, so the probability of being on the right track is $\frac{10}{10}=1$ (certainty). At the second pick, $8$ of the remaining $9$ balls are good, so the probability of being on the right track becomes $1\cdot\frac{8}{9}$. At the third pick, you have $6$ good balls out of $8$, so the probability of picking three differently colored balls is $\frac{8}{9}\cdot\frac{6}{8}=\frac{6}{9}=\frac{2}{3}$. The other possible case, that is, picking both balls of some color, then has probability $1-\frac{2}{3}=\frac{1}{3}$. We can also compute it directly: $\frac{1}{9}$ (the probability of picking two balls of the same color in the first two picks) plus $\frac{8}{9}\cdot\frac{2}{8}=\frac{2}{9}$ (the probability of picking two different colors in the first two picks, and a repetition at the third) gives $\frac{1}{9}+\frac{2}{9}=\frac{3}{9}=\frac{1}{3}$.</p> <p>I think this reasoning is much simpler than the other answers'.</p>
Answer 4: <p>Canardi's formula can be easily generalized. Assume:</p> <p>$x$: number of colors</p> <p>$y$: number of balls per color</p> <p>$z$: number of picks</p> <p>$P$: chance of all $z$ picks having a different color</p> <p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. if $z \leq x$):</p> <p>$$P = \frac{\binom{x}{z}*y^z}{\binom{x*y}{z}}$$</p>
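Not part of the original answers, but a quick brute-force check of the numbers above: enumerating all $\binom{10}{3}=120$ unordered draws confirms the probabilities $2/3$ and $1/3$. The encoding of the ten balls as integers $0$–$9$, with ball $b$ having color $b \,//\, 2$, is an assumption made purely for illustration.

```python
from itertools import combinations
from fractions import Fraction

# Assumed encoding: balls are 0..9, two per color, and ball b has color b // 2.
draws = list(combinations(range(10), 3))            # all C(10, 3) = 120 unordered draws
distinct = sum(len({b // 2 for b in d}) == 3 for d in draws)

print(Fraction(distinct, len(draws)))               # 2/3 -> three different colors
print(Fraction(len(draws) - distinct, len(draws)))  # 1/3 -> some color repeats
```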
if $z \leq x$):</p> <p>$$P = \frac{\binom{x}{z}*y^z}{\binom{x*y}{z}}$$</p> Answer 45: <p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p> <p>There are $_{10}C_3$ ways to pick three balls out.</p> <p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\cdot (_5C_3) / (_{10}C_3)$.</p> <p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \cdot 8 / (_{10}C_3).$</p> <p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p> Answer 46: <p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p> <p>We have $5$ colors : $C_1, C_2,..,C_5$</p> <p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\binom{10}{3}$ possibles scenarios.</p> <p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a "good" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p> <p>The total is then $\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\frac{\binom{5}{3}*2^3}{\binom{10}{3}}$$</p> Answer 47: <p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\cdot\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\frac{8}{9}\cdot\frac{6}{8}=\frac{6}{9}=\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\frac{2}{3}=\frac{1}{3}$. We can also compute it directly: $\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\frac{8}{9}\cdot\frac{2}{8}=\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\frac{1}{9}+\frac{2}{9}=\frac{3}{9}=\frac{1}{3}$.</p> <p>I think this reasoning is much simpler than other answers.</p> Answer 48: <p>Canardi's formula can be easily generalized. Assume:</p> <p>$x$: number of colors</p> <p>$y$: number of balls per color</p> <p>$z$: number of picks</p> <p>$P$: chance of all $z$ picks having a different color </p> <p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \leq x$):</p> <p>$$P = \frac{\binom{x}{z}*y^z}{\binom{x*y}{z}}$$</p> Answer 49: <p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p> <p>There are $_{10}C_3$ ways to pick three balls out.</p> <p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\cdot (_5C_3) / (_{10}C_3)$.</p> <p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \cdot 8 / (_{10}C_3).$</p> <p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p> Answer 50: <p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p> <p>We have $5$ colors : $C_1, C_2,..,C_5$</p> <p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\binom{10}{3}$ possibles scenarios.</p> <p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a "good" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p> <p>The total is then $\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\frac{\binom{5}{3}*2^3}{\binom{10}{3}}$$</p> Answer 51: <p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\cdot\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\frac{8}{9}\cdot\frac{6}{8}=\frac{6}{9}=\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\frac{2}{3}=\frac{1}{3}$. We can also compute it directly: $\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\frac{8}{9}\cdot\frac{2}{8}=\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\frac{1}{9}+\frac{2}{9}=\frac{3}{9}=\frac{1}{3}$.</p> <p>I think this reasoning is much simpler than other answers.</p> Answer 52: <p>Canardi's formula can be easily generalized. Assume:</p> <p>$x$: number of colors</p> <p>$y$: number of balls per color</p> <p>$z$: number of picks</p> <p>$P$: chance of all $z$ picks having a different color </p> <p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. 
if $z \leq x$):</p> <p>$$P = \frac{\binom{x}{z}*y^z}{\binom{x*y}{z}}$$</p> Answer 53: <p>If you're picking the balls without replacement, then you can treat this problem with combinations (since it doesn't matter in which order you pick the balls).</p> <p>There are $_{10}C_3$ ways to pick three balls out.</p> <p>To pick three different colors, choose the colors ($_5C_3$) and then choose which ball of each color ($8$). This gives probability $8\cdot (_5C_3) / (_{10}C_3)$.</p> <p>To pick two different colors, pick the color with two balls ($5$) and pick the other ball ($8$). So, probability $5 \cdot 8 / (_{10}C_3).$</p> <p>This method works for any number of colors and any number of balls per color, <em>as long as</em> the number of balls per color is the same. If you allow for different numbers of balls of particular colors then you'll need to treat those cases differently.</p> Answer 54: <p>I do not think there is a cooking formula for everything :It is all about counting. First, we count the possible scenarios, and then the ones in our favor.</p> <p>We have $5$ colors : $C_1, C_2,..,C_5$</p> <p>We pick $3$ balls, how many ways are they to pick $3$ balls out of $10$ ? $\binom{10}{3}$ possibles scenarios.</p> <p>Now, here, we count the scenarios in our favor, that is $3$ different colors. First, we must choose $3$ colors in order to create a "good" scenario, for example : one red, one blue , one yellow. Here we have $5$ colors, and there are $\binom{5}{3}$ ways to choose three colors. For each color, we have two choices of balls, that is $2^3$ ways to pick 3 balls among the three chosen colors.</p> <p>The total is then $\binom{5}{3}*2^3$ , and the probability is the ratio between the number of good scenarios and the number of possible scenarios $$\frac{\binom{5}{3}*2^3}{\binom{10}{3}}$$</p> Answer 55: <p>Just bear in mind “favorable cases divided by all cases” and you won't be confused. In your example, let's compute the probability of picking three different colors. every ball is good at the first pick, so the probability to be on the right track is $\frac{10}{10}=1$ (certainty). At the second pick, 8 balls are good in overall 9 balls, so the probability to be on the right track becomes $1\cdot\frac{8}{9}$. At the third pick, you have 6 good balls in 8, so the probability of picking three differently colored balls is $\frac{8}{9}\cdot\frac{6}{8}=\frac{6}{9}=\frac{2}{3}$. The other possible case, that is, to have both balls of a color, is then $1-\frac{2}{3}=\frac{1}{3}$. We can also compute it directly: $\frac{1}{9}$ (probability of picking two balls of the same color in the first two picks) and $\frac{8}{9}\cdot\frac{2}{8}=\frac{2}{9}$ (probability of picking two different colors in the first two picks, and a repetition at the third) is $\frac{1}{9}+\frac{2}{9}=\frac{3}{9}=\frac{1}{3}$.</p> <p>I think this reasoning is much simpler than other answers.</p> Answer 56: <p>Canardi's formula can be easily generalized. Assume:</p> <p>$x$: number of colors</p> <p>$y$: number of balls per color</p> <p>$z$: number of picks</p> <p>$P$: chance of all $z$ picks having a different color </p> <p>Now, if $z &gt; x$, then obviously $P = 0$. Otherwise (i.e. if $z \leq x$):</p> <p>$$P = \frac{\binom{x}{z}*y^z}{\binom{x*y}{z}}$$</p>
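<p>As a quick numerical sanity check of the general formula above, here is a minimal Python sketch (standard library only; the function names are ad hoc for this illustration) that compares the closed form with a brute-force enumeration for the concrete case discussed in the thread, namely $x=5$ colors, $y=2$ balls per color, $z=3$ picks, where both computations should return $2/3$.</p>
<pre><code>from math import comb
from itertools import combinations

def p_all_different(x, y, z):
    # Closed form from the answer above. math.comb returns 0 when the lower
    # index exceeds the upper one, so asking for more distinct colors than
    # exist automatically yields probability 0.
    return comb(x, z) * y ** z / comb(x * y, z)

def p_brute_force(x, y, z):
    # Enumerate every z-subset of the x*y balls and count those subsets
    # in which all chosen balls carry distinct colors.
    balls = [(color, i) for color in range(x) for i in range(y)]
    subsets = list(combinations(balls, z))
    good = sum(1 for s in subsets if len({color for color, _ in s}) == z)
    return good / len(subsets)

print(p_all_different(5, 2, 3))  # 0.666..., i.e. 2/3
print(p_brute_force(5, 2, 3))    # matches the closed form
</code></pre>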
The sum of three primes is 100, one of them exceeds the other by 36, find the largest one
2,097,486
7
https://math.stackexchange.com/questions/2097486/the-sum-of-three-primes-is-100-one-of-them-exceeds-the-other-by-36-find-the-la
<blockquote> <p>The sum of $3$ prime numbers is $100$. One of them exceeds the other by $36$. Find the largest one.</p> </blockquote> <p>My attempt:</p> <p>Let $p_1 +p_2 + p_3=100$, also let $p_2=p_1 +36 \implies 2p_1 +p_3=64$; from <a href="https://math.stackexchange.com/questions/148924/sum-of-three-primes">here</a>, $p_1 \neq p_2 \neq 2 \implies p_3=2$, hence $p_1 =31 \implies p_2=67$.</p> <p>Does this make sense? I don't have an answer; please add yours.</p>
[ { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. 
one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" }, { "answer_id": 2097730, "score": 3, "text": "<p>As we know that the sum of three odd numbers cannot be even i.e. one of the prime is even or $2$ let one of the other prime is $p_1$ then the third prime is $p_1+36$ now according to question $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p>\n\n<blockquote>\n <p>$$2,31,67$$</p>\n</blockquote>\n" } ]
Question: The sum of three primes is 100, one of them exceeds the other by 36, find the largest one Answer 1: <p>Since the sum of three odd numbers cannot be even, one of the primes must be even, i.e. equal to $2$. Let one of the other primes be $p_1$; then the third prime is $p_1+36$. Now, according to the question, $$2+p_1+p_1+36=100$$ $$2p_1=62$$ $$p_1=31$$ Hence, the primes are </p> <blockquote> <p>$$2,31,67$$</p> </blockquote>
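<p>To double-check the reasoning above, here is a minimal brute-force sketch in Python (the helper <code>is_prime</code> is ad hoc for this illustration): it searches all triples of primes below $100$ that sum to $100$ with one member exceeding another by $36$, and confirms that $2, 31, 67$ is the only solution, so the largest prime is $67$.</p>
<pre><code>def is_prime(n):
    # Trial division; adequate for the small range needed here.
    if n in (0, 1):
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [p for p in range(2, 100) if is_prime(p)]

solutions = {
    tuple(sorted((a, b, c)))
    for a in primes for b in primes for c in primes
    if a + b + c == 100 and b == a + 36
}
print(solutions)  # {(2, 31, 67)} -- the largest one is 67
</code></pre>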
What is the exact meaning of the following sentence?
2,097,800
7
https://math.stackexchange.com/questions/2097800/what-is-the-exact-meaning-of-the-following-sentence
<p>I have read the following sentence in my book.</p> <blockquote> <p>Let $\Phi$ be a convex function which is finite on interval $I$ containing the range of the function $g$ and infinite elsewhere, and let $\phi$ be an arbitrary determination of its derivative (i.e. having at a "corner" any value between the left and right derivatives), defined and finite on $I$.</p> </blockquote> <p>What is the meaning of "<em>having at a</em> 'corner' <em>any value between the left and right derivatives</em>"?</p>
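<p>A small numerical sketch may help fix ideas before the answers (this snippet is an illustration added for concreteness, with ad hoc function names): it approximates the one-sided derivatives of the convex function $\Phi(x)=|x|$ at its "corner" $x=0$ by difference quotients, which come out close to $-1$ and $+1$; an "arbitrary determination of the derivative" is then any function $\phi$ that takes some value in $[-1,1]$ at $0$ and the ordinary derivative elsewhere.</p>
<pre><code>def left_derivative(f, a, h=1e-6):
    # One-sided difference quotient from the left.
    return (f(a) - f(a - h)) / h

def right_derivative(f, a, h=1e-6):
    # One-sided difference quotient from the right.
    return (f(a + h) - f(a)) / h

Phi = abs  # convex, with a "corner" at 0

print(left_derivative(Phi, 0.0))   # approximately -1.0
print(right_derivative(Phi, 0.0))  # approximately +1.0

# At the corner, any value between -1 and 1 (inclusive) is an admissible
# choice for phi(0); away from 0 the derivative is unambiguous.
phi_at_corner = 0.25  # any number in that interval would do equally well
</code></pre>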
[ { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. </p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. 
</p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. </p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. 
So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. </p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. </p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. 
If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. </p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. </p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. 
It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. </p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. 
</p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. </p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. 
Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. </p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. </p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". 
If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. </p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. 
</p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. </p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. 
So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. </p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. </p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. 
If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. </p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. </p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. 
It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. </p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. 
</p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. </p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. 
Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. </p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. </p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". 
If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. </p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. 
</p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" }, { "answer_id": 2097814, "score": 10, "text": "<p>The only way a convex function can fail to be differentiable at a point is to have a \"corner\" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>\n" }, { "answer_id": 2097812, "score": 4, "text": "<p>A \"corner\" occurs if the derivate from the left differs from the derivate from the right. </p>\n\n<p>At such positions, the function is not differentiable.</p>\n\n<p>A simple example is $f(x)=|x|$ for $x_0=0$. </p>\n\n<p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p>\n\n<p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p>\n\n<p>So, the limit for $x\\rightarrow 0$ does not exist.</p>\n" }, { "answer_id": 2097813, "score": 2, "text": "<p>The convex function $\\Phi$ has right and left derivatives $\\Phi'_+(x)$ and $\\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\\Phi'_+$ and $\\Phi'_-$ are non-decreasing, $\\Phi'_+(x-)=\\Phi'_-(x)$ and $\\Phi'_-(x+)=\\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\\Phi'_-(x)\\le\\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\\Phi'_+$ has a jump discontinuity. A \"corner\" refers to an $x$ with $\\Phi'_-(x)&lt;\\Phi'_+(x)$. It seems that the author of your book would have you take $\\phi(x)=\\Phi'_+(x)$ if $\\Phi'_+(x)=\\Phi'_-(x)$, and $\\phi(x)$ any point of $[\\Phi'_-(x),\\Phi'_+(x)]$ if $\\Phi'_-(x)&lt;\\Phi'_+(x)$.</p>\n" }, { "answer_id": 2097816, "score": 1, "text": "<p>The statement that $\\phi$ is \"an arbitrary determination of its derivative\" means that if $x \\in I$, then $\\phi(x) \\in \\partial \\Phi(x)$, where $\\partial \\Phi(x)$ is the subdifferential of $\\Phi$ at $x$.</p>\n\n<p>Edit:</p>\n\n<p>I should have explained what \"subdifferential\" means. If a convex function $f:\\mathbb R^n \\to \\mathbb R \\cup \\{\\infty\\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \\geq f(x_0) + \\langle \\nabla f(x_0),x - x_0 \\rangle$ for all $x \\in \\mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that\n$$\n\\tag{$\\spadesuit$} f(x) \\geq f(x_0) + \\langle g, x - x_0 \\rangle \\quad \\text{for all } x \\in \\mathbb R^n.\n$$\nA vector $g$ which has this property is called a \"subgradient\" of $f$ at $x_0$. 
So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p>\n\n<p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\\partial f(x_0)$. For example, let $f:\\mathbb R \\to \\mathbb R$ such that $f(x) = | x |$ for all $x \\in \\mathbb R$. Then $\\partial f(0) = [-1,1]$.</p>\n\n<p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\\partial f(x_0)$ is a singleton. In this case, $\\partial f(x_0) = \\{ \\nabla f(x_0) \\}$. </p>\n" }, { "answer_id": 2097821, "score": 1, "text": "<p>The right and left derivatives of $\\Phi$ at a point $a$ in the domain of $\\Phi$ are respectively\n\\begin{align}\n&amp; \\lim_{\\Delta x\\,\\downarrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x} \\\\[10pt]\n\\text{and } &amp; \\lim_{\\Delta x\\,\\uparrow\\,a} \\frac{ \\Phi(a+\\Delta x) - \\Phi(a) }{\\Delta x}\n\\end{align}\nWith a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a \"corner\". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>\n" } ]
Question: What is the exact meaning of the following sentence?

Answer 1: <p>The only way a convex function can fail to be differentiable at a point is to have a "corner" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different, there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p>

Answer 2: <p>A "corner" occurs if the derivative from the left differs from the derivative from the right.</p> <p>At such points, the function is not differentiable.</p> <p>A simple example is $f(x)=|x|$ at $x_0=0$.</p> <p>For $x&gt;0$, we have $f(x)=x$, hence $f'(x)=1$.</p> <p>For $x&lt;0$, we have $f(x)=-x$, hence $f'(x)=-1$.</p> <p>So, the limit for $x\rightarrow 0$ does not exist.</p>

Answer 3: <p>The convex function $\Phi$ has right and left derivatives $\Phi'_+(x)$ and $\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\Phi'_+$ and $\Phi'_-$ are non-decreasing, and $\Phi'_+(x-)=\Phi'_-(x)$ and $\Phi'_-(x+)=\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\Phi'_-(x)\le\Phi'_+(x)$, with equality except on the countable set of $x$ values where $\Phi'_+$ has a jump discontinuity. A "corner" refers to an $x$ with $\Phi'_-(x)&lt;\Phi'_+(x)$. It seems that the author of your book would have you take $\phi(x)=\Phi'_+(x)$ if $\Phi'_+(x)=\Phi'_-(x)$, and $\phi(x)$ any point of $[\Phi'_-(x),\Phi'_+(x)]$ if $\Phi'_-(x)&lt;\Phi'_+(x)$.</p>

Answer 4: <p>The statement that $\phi$ is "an arbitrary determination of its derivative" means that if $x \in I$, then $\phi(x) \in \partial \Phi(x)$, where $\partial \Phi(x)$ is the subdifferential of $\Phi$ at $x$.</p> <p>Edit:</p> <p>I should have explained what "subdifferential" means. If a convex function $f:\mathbb R^n \to \mathbb R \cup \{\infty\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \geq f(x_0) + \langle \nabla f(x_0),x - x_0 \rangle$ for all $x \in \mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that $$ \tag{$\spadesuit$} f(x) \geq f(x_0) + \langle g, x - x_0 \rangle \quad \text{for all } x \in \mathbb R^n. $$ A vector $g$ which has this property is called a "subgradient" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p> <p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\partial f(x_0)$. For example, let $f:\mathbb R \to \mathbb R$ such that $f(x) = | x |$ for all $x \in \mathbb R$. Then $\partial f(0) = [-1,1]$.</p> <p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\partial f(x_0)$ is a singleton. In this case, $\partial f(x_0) = \{ \nabla f(x_0) \}$.</p>
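The claim $\partial f(0)=[-1,1]$ for $f(x)=|x|$ is easy to sanity-check numerically. The following is a minimal Python sketch of inequality $(\spadesuit)$ on a sample grid; the grid, the candidate slopes, and the helper name `is_subgradient` are arbitrary illustrative choices, not anything from the original answers.

```python
# Numerically check which slopes g are subgradients of f(x) = |x| at x0 = 0,
# i.e. satisfy f(x) >= f(x0) + g * (x - x0) for every sampled x.
# Sketch only: the sample grid and the candidate slopes are arbitrary choices.

def is_subgradient(f, x0, g, xs):
    """True if g satisfies the subgradient inequality at x0 on the grid xs."""
    return all(f(x) >= f(x0) + g * (x - x0) for x in xs)

f = abs
x0 = 0.0
xs = [i / 100.0 for i in range(-300, 301)]  # sample points in [-3, 3]

for g in (-1.5, -1.0, -0.3, 0.0, 0.7, 1.0, 1.5):
    print(f"g = {g:+.1f}: subgradient of |x| at 0? {is_subgradient(f, x0, g, xs)}")

# Prints True exactly for g in [-1, 1], consistent with the subdifferential
# of |x| at 0 being the interval [-1, 1].
```

On the sampled grid, exactly the slopes in $[-1,1]$ pass the test, which is the one-dimensional picture behind "an arbitrary determination of its derivative": any such $g$ may be taken as $\phi(0)$.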
Answer 5: <p>The right and left derivatives of $\Phi$ at a point $a$ in the domain of $\Phi$ are respectively \begin{align} &amp; \lim_{\Delta x\,\downarrow\,0} \frac{ \Phi(a+\Delta x) - \Phi(a) }{\Delta x} \\[10pt] \text{and } &amp; \lim_{\Delta x\,\uparrow\,0} \frac{ \Phi(a+\Delta x) - \Phi(a) }{\Delta x} \end{align} With a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a "corner". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>
</p> <p>At such positions, the function is not differentiable.</p> <p>A simple example is $f(x)=|x|$ for $x_0=0$. </p> <p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p> <p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p> <p>So, the limit for $x\rightarrow 0$ does not exist.</p> Answer 58: <p>The convex function $\Phi$ has right and left derivatives $\Phi'_+(x)$ and $\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\Phi'_+$ and $\Phi'_-$ are non-decreasing, $\Phi'_+(x-)=\Phi'_-(x)$ and $\Phi'_-(x+)=\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\Phi'_-(x)\le\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\Phi'_+$ has a jump discontinuity. A "corner" refers to an $x$ with $\Phi'_-(x)&lt;\Phi'_+(x)$. It seems that the author of your book would have you take $\phi(x)=\Phi'_+(x)$ if $\Phi'_+(x)=\Phi'_-(x)$, and $\phi(x)$ any point of $[\Phi'_-(x),\Phi'_+(x)]$ if $\Phi'_-(x)&lt;\Phi'_+(x)$.</p> Answer 59: <p>The statement that $\phi$ is "an arbitrary determination of its derivative" means that if $x \in I$, then $\phi(x) \in \partial \Phi(x)$, where $\partial \Phi(x)$ is the subdifferential of $\Phi$ at $x$.</p> <p>Edit:</p> <p>I should have explained what "subdifferential" means. If a convex function $f:\mathbb R^n \to \mathbb R \cup \{\infty\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \geq f(x_0) + \langle \nabla f(x_0),x - x_0 \rangle$ for all $x \in \mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that $$ \tag{$\spadesuit$} f(x) \geq f(x_0) + \langle g, x - x_0 \rangle \quad \text{for all } x \in \mathbb R^n. $$ A vector $g$ which has this property is called a "subgradient" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p> <p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\partial f(x_0)$. For example, let $f:\mathbb R \to \mathbb R$ such that $f(x) = | x |$ for all $x \in \mathbb R$. Then $\partial f(0) = [-1,1]$.</p> <p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\partial f(x_0)$ is a singleton. In this case, $\partial f(x_0) = \{ \nabla f(x_0) \}$. </p> Answer 60: <p>The right and left derivatives of $\Phi$ at a point $a$ in the domain of $\Phi$ are respectively \begin{align} &amp; \lim_{\Delta x\,\downarrow\,a} \frac{ \Phi(a+\Delta x) - \Phi(a) }{\Delta x} \\[10pt] \text{and } &amp; \lim_{\Delta x\,\uparrow\,a} \frac{ \Phi(a+\Delta x) - \Phi(a) }{\Delta x} \end{align} With a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a "corner". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p> Answer 61: <p>The only way a convex function can fail to be differentiable at a point is to have a "corner" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. 
The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p> Answer 62: <p>A "corner" occurs if the derivate from the left differs from the derivate from the right. </p> <p>At such positions, the function is not differentiable.</p> <p>A simple example is $f(x)=|x|$ for $x_0=0$. </p> <p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p> <p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p> <p>So, the limit for $x\rightarrow 0$ does not exist.</p> Answer 63: <p>The convex function $\Phi$ has right and left derivatives $\Phi'_+(x)$ and $\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\Phi'_+$ and $\Phi'_-$ are non-decreasing, $\Phi'_+(x-)=\Phi'_-(x)$ and $\Phi'_-(x+)=\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\Phi'_-(x)\le\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\Phi'_+$ has a jump discontinuity. A "corner" refers to an $x$ with $\Phi'_-(x)&lt;\Phi'_+(x)$. It seems that the author of your book would have you take $\phi(x)=\Phi'_+(x)$ if $\Phi'_+(x)=\Phi'_-(x)$, and $\phi(x)$ any point of $[\Phi'_-(x),\Phi'_+(x)]$ if $\Phi'_-(x)&lt;\Phi'_+(x)$.</p> Answer 64: <p>The statement that $\phi$ is "an arbitrary determination of its derivative" means that if $x \in I$, then $\phi(x) \in \partial \Phi(x)$, where $\partial \Phi(x)$ is the subdifferential of $\Phi$ at $x$.</p> <p>Edit:</p> <p>I should have explained what "subdifferential" means. If a convex function $f:\mathbb R^n \to \mathbb R \cup \{\infty\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \geq f(x_0) + \langle \nabla f(x_0),x - x_0 \rangle$ for all $x \in \mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that $$ \tag{$\spadesuit$} f(x) \geq f(x_0) + \langle g, x - x_0 \rangle \quad \text{for all } x \in \mathbb R^n. $$ A vector $g$ which has this property is called a "subgradient" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p> <p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\partial f(x_0)$. For example, let $f:\mathbb R \to \mathbb R$ such that $f(x) = | x |$ for all $x \in \mathbb R$. Then $\partial f(0) = [-1,1]$.</p> <p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\partial f(x_0)$ is a singleton. In this case, $\partial f(x_0) = \{ \nabla f(x_0) \}$. </p> Answer 65: <p>The right and left derivatives of $\Phi$ at a point $a$ in the domain of $\Phi$ are respectively \begin{align} &amp; \lim_{\Delta x\,\downarrow\,a} \frac{ \Phi(a+\Delta x) - \Phi(a) }{\Delta x} \\[10pt] \text{and } &amp; \lim_{\Delta x\,\uparrow\,a} \frac{ \Phi(a+\Delta x) - \Phi(a) }{\Delta x} \end{align} With a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a "corner". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p> Answer 66: <p>The only way a convex function can fail to be differentiable at a point is to have a "corner" like the one for $|x|$ at $0$. The derivative of that function is $-1$ for $x &lt; 0$ and $+1$ for $x &gt; 0$. 
Those are the values of the left and right derivatives at $0$. Since they are different there is no derivative at $0$. The quotation says that for the purposes it has in mind you can choose any value between $-1$ and $+1$ for the derivative at $0$.</p> Answer 67: <p>A "corner" occurs if the derivate from the left differs from the derivate from the right. </p> <p>At such positions, the function is not differentiable.</p> <p>A simple example is $f(x)=|x|$ for $x_0=0$. </p> <p>For $x&gt;0$ , we have $f(x)=x$, hence $f'(x)=1$</p> <p>For $x&lt;0$ , we have $f(x)=-x$, hence $f'(x)=-1$</p> <p>So, the limit for $x\rightarrow 0$ does not exist.</p> Answer 68: <p>The convex function $\Phi$ has right and left derivatives $\Phi'_+(x)$ and $\Phi'_-(x)$ at each point $x$ in the interior of $I$. Both $\Phi'_+$ and $\Phi'_-$ are non-decreasing, $\Phi'_+(x-)=\Phi'_-(x)$ and $\Phi'_-(x+)=\Phi'_+(x)$ for all $x$ in the interior of $I$. It follows that $\Phi'_-(x)\le\Phi'_+(x)$ with equality except on the countable set of $x$ values where $\Phi'_+$ has a jump discontinuity. A "corner" refers to an $x$ with $\Phi'_-(x)&lt;\Phi'_+(x)$. It seems that the author of your book would have you take $\phi(x)=\Phi'_+(x)$ if $\Phi'_+(x)=\Phi'_-(x)$, and $\phi(x)$ any point of $[\Phi'_-(x),\Phi'_+(x)]$ if $\Phi'_-(x)&lt;\Phi'_+(x)$.</p> Answer 69: <p>The statement that $\phi$ is "an arbitrary determination of its derivative" means that if $x \in I$, then $\phi(x) \in \partial \Phi(x)$, where $\partial \Phi(x)$ is the subdifferential of $\Phi$ at $x$.</p> <p>Edit:</p> <p>I should have explained what "subdifferential" means. If a convex function $f:\mathbb R^n \to \mathbb R \cup \{\infty\}$ is finite and differentiable at $x_0$, then it can be shown that $f(x) \geq f(x_0) + \langle \nabla f(x_0),x - x_0 \rangle$ for all $x \in \mathbb R^n$. Even if $f$ is not differentiable at $x_0$, usually there exist vectors $g$ such that $$ \tag{$\spadesuit$} f(x) \geq f(x_0) + \langle g, x - x_0 \rangle \quad \text{for all } x \in \mathbb R^n. $$ A vector $g$ which has this property is called a "subgradient" of $f$ at $x_0$. So even if $f$ does not have a gradient at $x_0$, at least $f$ has subgradients at $x_0$, and that is better than nothing.</p> <p>The set of all subgradients of $f$ at $x_0$ is called the subdifferential of $f$ at $x_0$, and is denoted $\partial f(x_0)$. For example, let $f:\mathbb R \to \mathbb R$ such that $f(x) = | x |$ for all $x \in \mathbb R$. Then $\partial f(0) = [-1,1]$.</p> <p>It can be shown that, under mild assumptions, a convex function $f$ is differentiable at $x_0$ if and only if $\partial f(x_0)$ is a singleton. In this case, $\partial f(x_0) = \{ \nabla f(x_0) \}$. </p> Answer 70: <p>The right and left derivatives of $\Phi$ at a point $a$ in the domain of $\Phi$ are respectively \begin{align} &amp; \lim_{\Delta x\,\downarrow\,a} \frac{ \Phi(a+\Delta x) - \Phi(a) }{\Delta x} \\[10pt] \text{and } &amp; \lim_{\Delta x\,\uparrow\,a} \frac{ \Phi(a+\Delta x) - \Phi(a) }{\Delta x} \end{align} With a convex function, the left derivative at a point is always less than or equal to the right derivative. At most points they are equal. A point where they are not equal is a "corner". If there is a corner at which the left derivative is $3$ and the right derivative is $4$, then they're assigning to $\varphi$ at that point some arbitrary value not less than $3$ and not greater than $4$.</p>
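As a sanity check of that prescription on the standard example $\Phi(x)=|x|$, here is a minimal Python sketch (the helper `phi`, the step `h`, and the test grid are illustrative assumptions, not taken from the answers above): the one-sided difference quotients at the corner $x=0$ come out near $-1$ and $+1$, and every choice of $\phi(0)$ in $[-1,1]$ satisfies the subgradient inequality $\Phi(x)\ge\Phi(0)+\phi(0)\,x$.

```python
# One-sided difference quotients for Phi(x) = |x| at its corner x = 0.
def phi(x):
    return abs(x)

h = 1e-8
right_deriv = (phi(h) - phi(0)) / h      # approximates Phi'_+(0) = +1
left_deriv = (phi(-h) - phi(0)) / (-h)   # approximates Phi'_-(0) = -1
print(left_deriv, right_deriv)

# Every g in [Phi'_-(0), Phi'_+(0)] = [-1, 1] is a valid subgradient at 0:
# |x| >= 0 + g * (x - 0) holds for all x on a test grid.
grid = [k / 100.0 - 1.0 for k in range(201)]
for g in (-1.0, -0.3, 0.0, 0.7, 1.0):
    assert all(phi(x) >= phi(0) + g * x for x in grid)
```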
Proving that $\cos(\arcsin(x))=\sqrt{1-x^2}$
2,098,163
7
https://math.stackexchange.com/questions/2098163/proving-that-cos-arcsinx-sqrt1-x2
<p>I am asked to prove that $\cos(\arcsin(x)) = \sqrt{1-x^2}$.</p> <p>I have used the trig identity to show that $\cos^2(\arcsin(x)) = 1 - x^2$.</p> <p>Therefore, why isn't the answer denoted with the plus-or-minus sign, as in $\pm \sqrt{1-x^2}$?</p> <p>Thank you!</p>
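A quick numerical illustration of the point the answers below all turn on (a sketch only; the sample values are arbitrary): `math.asin` returns the principal value in $[-\pi/2,\pi/2]$, and the cosine of any angle in that range is non-negative, which is why no $\pm$ appears.

```python
import math

# math.asin returns the principal value in [-pi/2, pi/2]; cosines of
# angles in that range are never negative, so no minus sign is needed.
for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    theta = math.asin(x)
    assert -math.pi / 2 <= theta <= math.pi / 2
    assert math.cos(theta) >= 0.0
    print(f"asin({x:+.1f}) = {theta:+.4f} rad, cos = {math.cos(theta):+.4f}")
```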
[ { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. 
Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). 
So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. 
Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). 
So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. 
Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). 
So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. 
Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). 
So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" }, { "answer_id": 2098164, "score": 4, "text": "<p>Hope this image helps (pythagorean identity?):</p>\n\n<p><a href=\"https://i.stack.imgur.com/M9cq2.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/M9cq2.png\" alt=\"enter image description here\"></a></p>\n\n<p>And as to the $\\pm$, it simply because $\\cos(\\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p>\n" }, { "answer_id": 2098257, "score": 3, "text": "<p>Let $\\arcsin x = \\theta$. Then, by definition of the arcsine function, $\\sin\\theta = x$, where $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, and \n$\\cos(\\arcsin x) = \\cos\\theta$. Using the Pythagorean Identity $\\sin^2\\theta + \\cos^2\\theta = 1$, we obtain \n\\begin{align*}\n\\sin^2\\theta + \\cos^2\\theta &amp; = 1\\\\\n\\cos^2\\theta &amp; = 1 - \\sin^2\\theta\\\\\n\\cos^2\\theta &amp; = 1 - x^2\\\\\n|\\cos\\theta| &amp; = \\sqrt{1 - x^2}\n\\end{align*}\nSince $-\\frac{\\pi}{2} \\leq \\theta \\leq \\frac{\\pi}{2}$, $\\cos\\theta \\geq 0$. 
Thus, $|\\cos\\theta| = \\cos\\theta$, whence\n\\begin{align*}\n\\cos\\theta &amp; = \\sqrt{1 - x^2}\\\\\n\\cos(\\arcsin x) &amp; = \\sqrt{1 - x^2}\n\\end{align*}</p>\n" }, { "answer_id": 2098358, "score": 2, "text": "<p>Note: this answer does not explain the formula $\\sqrt{1-x^2}$, but does address why you don't need $\\pm$.</p>\n\n<p>Let $x$ be any number between $-1$ and $1$. Then $\\arcsin(x)$ is an angle $\\theta$ with $\\sin(\\theta)=x$. But <em>which</em> angle $\\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\\pi/2$ and $\\pi/2$ radians).</p>\n\n<p>All right, now we want $\\cos(\\arcsin(x))$, or $\\cos(\\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\\cos(\\arcsin(x))$ is <strong>always non-negative</strong>.</p>\n" } ]
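The derivation in the second answer can be spot-checked numerically. The sketch below is illustrative only (the grid of 201 points and the $10^{-12}$ tolerance are arbitrary choices); it confirms that $\cos(\arcsin x)$ matches $\sqrt{1-x^2}$ to machine precision on $[-1,1]$ and is never negative, so the $+$ branch of the square root is the correct one.

```python
import math

# Compare cos(arcsin(x)) against sqrt(1 - x^2) on an even grid over [-1, 1].
for k in range(201):
    x = k / 100.0 - 1.0
    lhs = math.cos(math.asin(x))
    rhs = math.sqrt(1.0 - x * x)
    assert abs(lhs - rhs) < 1e-12   # the identity holds numerically
    assert lhs >= 0.0               # and the left side is never negative
print("cos(asin(x)) agrees with sqrt(1 - x**2) and is non-negative on [-1, 1]")
```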
Question: Proving that $\cos(\arcsin(x))=\sqrt{1-x^2}$ Answer 1: <p>Hope this image helps (pythagorean identity?):</p> <p><a href="https://i.stack.imgur.com/M9cq2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M9cq2.png" alt="enter image description here"></a></p> <p>And as to the $\pm$, it is simply because $\cos(\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p> Answer 2: <p>Let $\arcsin x = \theta$. Then, by definition of the arcsine function, $\sin\theta = x$, where $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, and $\cos(\arcsin x) = \cos\theta$. Using the Pythagorean Identity $\sin^2\theta + \cos^2\theta = 1$, we obtain \begin{align*} \sin^2\theta + \cos^2\theta &amp; = 1\\ \cos^2\theta &amp; = 1 - \sin^2\theta\\ \cos^2\theta &amp; = 1 - x^2\\ |\cos\theta| &amp; = \sqrt{1 - x^2} \end{align*} Since $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, $\cos\theta \geq 0$. Thus, $|\cos\theta| = \cos\theta$, whence \begin{align*} \cos\theta &amp; = \sqrt{1 - x^2}\\ \cos(\arcsin x) &amp; = \sqrt{1 - x^2} \end{align*}</p> Answer 3: <p>Note: this answer does not explain the formula $\sqrt{1-x^2}$, but does address why you don't need $\pm$.</p> <p>Let $x$ be any number between $-1$ and $1$. Then $\arcsin(x)$ is an angle $\theta$ with $\sin(\theta)=x$. But <em>which</em> angle $\theta$? There are lots of different angles that all have the same sine. 
<strong>By definition</strong>, $\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\pi/2$ and $\pi/2$ radians).</p> <p>All right, now we want $\cos(\arcsin(x))$, or $\cos(\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\cos(\arcsin(x))$ is <strong>always non-negative</strong>.</p> Answer 28: <p>Hope this image helps (pythagorean identity?):</p> <p><a href="https://i.stack.imgur.com/M9cq2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M9cq2.png" alt="enter image description here"></a></p> <p>And as to the $\pm$, it simply because $\cos(\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p> Answer 29: <p>Let $\arcsin x = \theta$. Then, by definition of the arcsine function, $\sin\theta = x$, where $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, and $\cos(\arcsin x) = \cos\theta$. Using the Pythagorean Identity $\sin^2\theta + \cos^2\theta = 1$, we obtain \begin{align*} \sin^2\theta + \cos^2\theta &amp; = 1\\ \cos^2\theta &amp; = 1 - \sin^2\theta\\ \cos^2\theta &amp; = 1 - x^2\\ |\cos\theta| &amp; = \sqrt{1 - x^2} \end{align*} Since $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, $\cos\theta \geq 0$. Thus, $|\cos\theta| = \cos\theta$, whence \begin{align*} \cos\theta &amp; = \sqrt{1 - x^2}\\ \cos(\arcsin x) &amp; = \sqrt{1 - x^2} \end{align*}</p> Answer 30: <p>Note: this answer does not explain the formula $\sqrt{1-x^2}$, but does address why you don't need $\pm$.</p> <p>Let $x$ be any number between $-1$ and $1$. Then $\arcsin(x)$ is an angle $\theta$ with $\sin(\theta)=x$. But <em>which</em> angle $\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\pi/2$ and $\pi/2$ radians).</p> <p>All right, now we want $\cos(\arcsin(x))$, or $\cos(\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\cos(\arcsin(x))$ is <strong>always non-negative</strong>.</p> Answer 31: <p>Hope this image helps (pythagorean identity?):</p> <p><a href="https://i.stack.imgur.com/M9cq2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M9cq2.png" alt="enter image description here"></a></p> <p>And as to the $\pm$, it simply because $\cos(\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p> Answer 32: <p>Let $\arcsin x = \theta$. Then, by definition of the arcsine function, $\sin\theta = x$, where $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, and $\cos(\arcsin x) = \cos\theta$. Using the Pythagorean Identity $\sin^2\theta + \cos^2\theta = 1$, we obtain \begin{align*} \sin^2\theta + \cos^2\theta &amp; = 1\\ \cos^2\theta &amp; = 1 - \sin^2\theta\\ \cos^2\theta &amp; = 1 - x^2\\ |\cos\theta| &amp; = \sqrt{1 - x^2} \end{align*} Since $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, $\cos\theta \geq 0$. 
Thus, $|\cos\theta| = \cos\theta$, whence \begin{align*} \cos\theta &amp; = \sqrt{1 - x^2}\\ \cos(\arcsin x) &amp; = \sqrt{1 - x^2} \end{align*}</p> Answer 33: <p>Note: this answer does not explain the formula $\sqrt{1-x^2}$, but does address why you don't need $\pm$.</p> <p>Let $x$ be any number between $-1$ and $1$. Then $\arcsin(x)$ is an angle $\theta$ with $\sin(\theta)=x$. But <em>which</em> angle $\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\pi/2$ and $\pi/2$ radians).</p> <p>All right, now we want $\cos(\arcsin(x))$, or $\cos(\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\cos(\arcsin(x))$ is <strong>always non-negative</strong>.</p> Answer 34: <p>Hope this image helps (pythagorean identity?):</p> <p><a href="https://i.stack.imgur.com/M9cq2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M9cq2.png" alt="enter image description here"></a></p> <p>And as to the $\pm$, it simply because $\cos(\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p> Answer 35: <p>Let $\arcsin x = \theta$. Then, by definition of the arcsine function, $\sin\theta = x$, where $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, and $\cos(\arcsin x) = \cos\theta$. Using the Pythagorean Identity $\sin^2\theta + \cos^2\theta = 1$, we obtain \begin{align*} \sin^2\theta + \cos^2\theta &amp; = 1\\ \cos^2\theta &amp; = 1 - \sin^2\theta\\ \cos^2\theta &amp; = 1 - x^2\\ |\cos\theta| &amp; = \sqrt{1 - x^2} \end{align*} Since $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, $\cos\theta \geq 0$. Thus, $|\cos\theta| = \cos\theta$, whence \begin{align*} \cos\theta &amp; = \sqrt{1 - x^2}\\ \cos(\arcsin x) &amp; = \sqrt{1 - x^2} \end{align*}</p> Answer 36: <p>Note: this answer does not explain the formula $\sqrt{1-x^2}$, but does address why you don't need $\pm$.</p> <p>Let $x$ be any number between $-1$ and $1$. Then $\arcsin(x)$ is an angle $\theta$ with $\sin(\theta)=x$. But <em>which</em> angle $\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\pi/2$ and $\pi/2$ radians).</p> <p>All right, now we want $\cos(\arcsin(x))$, or $\cos(\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\cos(\arcsin(x))$ is <strong>always non-negative</strong>.</p> Answer 37: <p>Hope this image helps (pythagorean identity?):</p> <p><a href="https://i.stack.imgur.com/M9cq2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M9cq2.png" alt="enter image description here"></a></p> <p>And as to the $\pm$, it simply because $\cos(\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p> Answer 38: <p>Let $\arcsin x = \theta$. Then, by definition of the arcsine function, $\sin\theta = x$, where $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, and $\cos(\arcsin x) = \cos\theta$. 
Using the Pythagorean Identity $\sin^2\theta + \cos^2\theta = 1$, we obtain \begin{align*} \sin^2\theta + \cos^2\theta &amp; = 1\\ \cos^2\theta &amp; = 1 - \sin^2\theta\\ \cos^2\theta &amp; = 1 - x^2\\ |\cos\theta| &amp; = \sqrt{1 - x^2} \end{align*} Since $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, $\cos\theta \geq 0$. Thus, $|\cos\theta| = \cos\theta$, whence \begin{align*} \cos\theta &amp; = \sqrt{1 - x^2}\\ \cos(\arcsin x) &amp; = \sqrt{1 - x^2} \end{align*}</p> Answer 39: <p>Note: this answer does not explain the formula $\sqrt{1-x^2}$, but does address why you don't need $\pm$.</p> <p>Let $x$ be any number between $-1$ and $1$. Then $\arcsin(x)$ is an angle $\theta$ with $\sin(\theta)=x$. But <em>which</em> angle $\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\pi/2$ and $\pi/2$ radians).</p> <p>All right, now we want $\cos(\arcsin(x))$, or $\cos(\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\cos(\arcsin(x))$ is <strong>always non-negative</strong>.</p> Answer 40: <p>Hope this image helps (pythagorean identity?):</p> <p><a href="https://i.stack.imgur.com/M9cq2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M9cq2.png" alt="enter image description here"></a></p> <p>And as to the $\pm$, it simply because $\cos(\arcsin(x))$ is only equal to one value, and this value can be only $+$ or $-$, but not both (so we just choose, and $+$ is just more logical).</p> Answer 41: <p>Let $\arcsin x = \theta$. Then, by definition of the arcsine function, $\sin\theta = x$, where $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, and $\cos(\arcsin x) = \cos\theta$. Using the Pythagorean Identity $\sin^2\theta + \cos^2\theta = 1$, we obtain \begin{align*} \sin^2\theta + \cos^2\theta &amp; = 1\\ \cos^2\theta &amp; = 1 - \sin^2\theta\\ \cos^2\theta &amp; = 1 - x^2\\ |\cos\theta| &amp; = \sqrt{1 - x^2} \end{align*} Since $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, $\cos\theta \geq 0$. Thus, $|\cos\theta| = \cos\theta$, whence \begin{align*} \cos\theta &amp; = \sqrt{1 - x^2}\\ \cos(\arcsin x) &amp; = \sqrt{1 - x^2} \end{align*}</p> Answer 42: <p>Note: this answer does not explain the formula $\sqrt{1-x^2}$, but does address why you don't need $\pm$.</p> <p>Let $x$ be any number between $-1$ and $1$. Then $\arcsin(x)$ is an angle $\theta$ with $\sin(\theta)=x$. But <em>which</em> angle $\theta$? There are lots of different angles that all have the same sine. <strong>By definition</strong>, $\arcsin(x)$ is an angle between $-90$° and $90$° (or, if you prefer, between $-\pi/2$ and $\pi/2$ radians).</p> <p>All right, now we want $\cos(\arcsin(x))$, or $\cos(\theta)$. That means we are taking the cosine of an angle between $-90$° and $90$°. The cosine of such an angle is never negative (we only get a negative cosine from an obtuse angle). So $\cos(\arcsin(x))$ is <strong>always non-negative</strong>.</p>
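A quick numerical sanity check of the identity derived above (my addition, not part of the original answers): at $x = \tfrac{1}{2}$,

$$\cos\left(\arcsin\tfrac{1}{2}\right) = \cos\tfrac{\pi}{6} = \tfrac{\sqrt{3}}{2} = \sqrt{1 - \left(\tfrac{1}{2}\right)^{2}},$$

in agreement with $\cos(\arcsin x) = \sqrt{1 - x^{2}}$.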
Generating Random Orthogonal Matrices
2,098,897
7
https://math.stackexchange.com/questions/2098897/generating-random-orthogonal-matrices
<p>Suppose I generate a random matrix A with every element sampled from a uniform [0, 1) distribution, and then use the Gram Schmidt procedure to get an orthogonal matrix Q.</p> <p>Will this generate every orthogonal matrix with elements within some interval on the real line? And will it do so with equal probability for each matrix?</p> <p>If not, how could I generate a random orthogonal matrix, preferably using Python? Thank you.</p>
[ { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" }, { "answer_id": 2100412, "score": 10, "text": "<p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p>\n\n<p>See <a href=\"https://arxiv.org/abs/math-ph/0609050\" rel=\"noreferrer\">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p>\n" }, { "answer_id": 3716218, "score": 8, "text": "<p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p>\n\n<p><a href=\"https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html\" rel=\"noreferrer\">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p>\n\n<p>For example:</p>\n\n<pre><code>import numpy as np\nfrom scipy.stats import ortho_group\n\nm = ortho_group.rvs(dim=5)\nmm = np.matmul(m.T, m)\nprint(m)\nprint(mm)\n</code></pre>\n\n<p>Output would be:</p>\n\n<pre><code>&gt;&gt;&gt; print(m)\n[[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194]\n [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728]\n [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308]\n [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778]\n [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]]\n\n&gt;&gt;&gt; print(mm)\n[[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17\n -1.36638999e-16]\n [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17\n 8.23085667e-17]\n [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16\n 2.22057189e-16]\n [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00\n 6.48125595e-17]\n [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17\n 1.00000000e+00]]\n</code></pre>\n" } ]
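For readers who want the uniform (Haar) distribution the first answer alludes to, here is a minimal sketch of the QR-based recipe that the linked paper (arXiv:math-ph/0609050) describes, assuming NumPy; the function name and structure are my own illustration, not part of either answer:

<pre><code>import numpy as np

def haar_orthogonal(n, rng=None):
    # Sample an n-by-n orthogonal matrix from the Haar (uniform) measure,
    # following the QR-based recipe in the paper linked above:
    # QR-factorize a Gaussian matrix, then fix the signs of R's diagonal
    # so the factorization is unique and Q is Haar-distributed.
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal((n, n))      # i.i.d. N(0, 1) entries
    q, r = np.linalg.qr(z)
    d = np.sign(np.diag(r))              # per-column sign correction
    d[d == 0] = 1.0                      # guard against a (measure-zero) zero diagonal
    return q * d                         # multiplies column j of q by d[j]

m = haar_orthogonal(5)
print(np.allclose(m.T @ m, np.eye(5)))   # True up to floating-point error
</code></pre>

The sign correction is the point of the recipe: a plain <code>np.linalg.qr</code> output follows the library's own sign convention, so without the correction the resulting Q is not exactly Haar-distributed.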
Question: Generating Random Orthogonal Matrices Answer 1: <p>If you sample elements from a uniform distribution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). However, I don't believe that it will generate all matrices with equal probability.</p> <p>See <a href="https://arxiv.org/abs/math-ph/0609050" rel="noreferrer">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p> Answer 2: <p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p> <p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html" rel="noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p> <p>For example:</p> <pre><code>import numpy as np from scipy.stats import ortho_group m = ortho_group.rvs(dim=5) mm = np.matmul(m.T, m) print(m) print(mm) </code></pre> <p>Output would be:</p> <pre><code>&gt;&gt;&gt; print(m) [[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194] [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728] [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308] [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778] [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]] &gt;&gt;&gt; print(mm) [[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17 -1.36638999e-16] [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17 8.23085667e-17] [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16 2.22057189e-16] [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00 6.48125595e-17] [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17 1.00000000e+00]] </code></pre>
However, I don't believe that it will generate all matrices with equal probability.</p> <p>See <a href="https://arxiv.org/abs/math-ph/0609050" rel="noreferrer">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p> Answer 22: <p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p> <p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html" rel="noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p> <p>For example:</p> <pre><code>import numpy as np from scipy.stats import ortho_group m = ortho_group.rvs(dim=5) mm = np.matmul(m.T, m) print(m) print(mm) </code></pre> <p>Output would be:</p> <pre><code>&gt;&gt;&gt; print(m) [[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194] [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728] [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308] [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778] [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]] &gt;&gt;&gt; print(mm) [[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17 -1.36638999e-16] [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17 8.23085667e-17] [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16 2.22057189e-16] [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00 6.48125595e-17] [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17 1.00000000e+00]] </code></pre> Answer 23: <p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p> <p>See <a href="https://arxiv.org/abs/math-ph/0609050" rel="noreferrer">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p> Answer 24: <p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p> <p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html" rel="noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p> <p>For example:</p> <pre><code>import numpy as np from scipy.stats import ortho_group m = ortho_group.rvs(dim=5) mm = np.matmul(m.T, m) print(m) print(mm) </code></pre> <p>Output would be:</p> <pre><code>&gt;&gt;&gt; print(m) [[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194] [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728] [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308] [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778] [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]] &gt;&gt;&gt; print(mm) [[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17 -1.36638999e-16] [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17 8.23085667e-17] [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16 2.22057189e-16] [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00 6.48125595e-17] [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17 1.00000000e+00]] </code></pre> Answer 25: <p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). 
However, I don't believe that it will generate all matrices with equal probability.</p> <p>See <a href="https://arxiv.org/abs/math-ph/0609050" rel="noreferrer">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p> Answer 26: <p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p> <p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html" rel="noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p> <p>For example:</p> <pre><code>import numpy as np from scipy.stats import ortho_group m = ortho_group.rvs(dim=5) mm = np.matmul(m.T, m) print(m) print(mm) </code></pre> <p>Output would be:</p> <pre><code>&gt;&gt;&gt; print(m) [[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194] [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728] [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308] [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778] [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]] &gt;&gt;&gt; print(mm) [[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17 -1.36638999e-16] [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17 8.23085667e-17] [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16 2.22057189e-16] [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00 6.48125595e-17] [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17 1.00000000e+00]] </code></pre> Answer 27: <p>If you sample elements from a uniform distribtution over $[-1,1]$ and apply the Gram Schmidt procedure, you can generate every possible orthogonal matrix (note that orthogonal matrices necessarily have elements within $[-1,1]$). However, I don't believe that it will generate all matrices with equal probability.</p> <p>See <a href="https://arxiv.org/abs/math-ph/0609050" rel="noreferrer">this paper</a> for further discussion, and a method that produces a uniformly random unitary matrix.</p> Answer 28: <p>You can now create a random orthogonal matrix in <strong>Python</strong> using <code>scipy</code>:</p> <p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html" rel="noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html</a></p> <p>For example:</p> <pre><code>import numpy as np from scipy.stats import ortho_group m = ortho_group.rvs(dim=5) mm = np.matmul(m.T, m) print(m) print(mm) </code></pre> <p>Output would be:</p> <pre><code>&gt;&gt;&gt; print(m) [[ 0.14335349 0.54646227 0.70593162 -0.3494284 -0.24574194] [-0.20076769 -0.34170533 0.66225963 0.61908678 0.14516728] [-0.37985244 0.04785107 0.1187815 -0.41531806 0.81659308] [ 0.80283293 0.19579297 -0.01096427 0.26347776 0.49757778] [-0.38770287 0.73755803 -0.22100559 0.50271436 0.06426077]] &gt;&gt;&gt; print(mm) [[ 1.00000000e+00 2.21728285e-17 4.32100135e-17 -6.37591413e-17 -1.36638999e-16] [ 2.21728285e-17 1.00000000e+00 -2.79918256e-17 5.41073624e-17 8.23085667e-17] [ 4.32100135e-17 -2.79918256e-17 1.00000000e+00 1.10703727e-16 2.22057189e-16] [-6.37591413e-17 5.41073624e-17 1.10703727e-16 1.00000000e+00 6.48125595e-17] [-1.36638999e-16 8.23085667e-17 2.22057189e-16 6.48125595e-17 1.00000000e+00]] </code></pre>
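<p>For context on why Gram-Schmidt over uniform entries is biased: the remedy described in the linked paper amounts, in numpy terms, to filling the matrix with independent <em>Gaussian</em> entries, taking a QR factorisation, and correcting the column signs using the diagonal of $R$. Below is a minimal sketch of that recipe; the helper name <code>haar_orthogonal</code> is our own, not from the paper or from scipy.</p> <pre><code>import numpy as np

def haar_orthogonal(n, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Gaussian entries are rotation-invariant, which uniform [-1, 1]
    # entries are not; this is exactly where the Gram-Schmidt recipe
    # loses uniformity.
    z = rng.standard_normal((n, n))
    # The raw Q of a QR factorisation inherits a bias from the
    # factorisation's sign convention; rescaling each column by the
    # sign of the corresponding diagonal entry of R removes it.
    q, r = np.linalg.qr(z)
    return q * np.sign(np.diag(r))

m = haar_orthogonal(5)
print(np.allclose(m.T @ m, np.eye(5)))  # True: m is orthogonal
</code></pre> <p><code>scipy.stats.ortho_group</code> above samples from the same Haar (uniform) distribution, so in practice the library call is preferable to rolling your own.</p>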
How to verify satisfiability in a model? (Confusions with Gödel's Completeness Theorem)
2,099,531
7
https://math.stackexchange.com/questions/2099531/how-to-verify-satisfialibility-in-a-model-confusions-with-g%c3%b6dels-completeness
<p>I just cannot believe that Gödel's Completeness Theorem is right. </p> <p>Let's say we fix some first-order logic with some structure. The theorem claims that for any sentence $P$ in this logic we have</p> <p>$$\vdash P\iff \mathcal{M}\vDash P \hspace{5pt}\text{for every model}\hspace{5pt}\mathcal{M}.$$ My doubts are due to the definitions of the above symbols. </p> <p>$\vdash P$ means that there is a formal proof of $P$ with respect to classical logic.</p> <p>$\mathcal{M}\vDash P$ means that $P$ is true in $\mathcal{M}$, i.e. that $P$, treated as a formula, is satisfied under every assignment of values $g$; this is denoted $\mathcal{M},g\vDash P.$ <a href="http://www.coli.uni-saarland.de/projects/milca/courses/comsem/html/node16.html#sec_fol.satdef" rel="nofollow noreferrer">Satisfaction as written on this fine site</a> is defined recursively. Consider just the first step of the recursion, i.e. $$M,g\vDash R(\tau_1,\dots,\tau_k) \hspace{5pt}\text{iff}\hspace{5pt}(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$$ where $R$ is some relation from the structure of the first-order logic and $I^g_F,F$ are parts of the interpretation.</p> <p>I see a problem because in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable. In fact, consider $R$ to be the relation $=$. We can end up with the set-theoretic statement $\mathfrak{c}=\aleph_1$, and one cannot tell whether this holds or not.</p> <p>In short, $\vdash P$ has a very concrete definition with clear means of verification. On the other hand, $\mathcal{M} \vDash P$ tells us nothing about how to check anything in the model.</p>
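<p>To make the recursion concrete before the answers below: over a <em>finite</em> structure the satisfaction definition is completely mechanical and can literally be run as a program; the worry in the question only bites when the domain is infinite (e.g. a model of set theory), where the quantifier loop below makes no computational sense. Here is a minimal Python sketch; the nested-tuple encoding of formulas is purely illustrative, and terms are restricted to variables for brevity (so the assignment $g$ plays the role of $I^g_F$).</p> <pre><code>def satisfies(M, g, phi):
    """Recursive satisfaction M, g |= phi for a finite structure M."""
    domain, F = M                # F maps relation symbols to sets of tuples
    op = phi[0]
    if op == "R":                # atomic case: R(t1, ..., tk)
        _, R, terms = phi
        return tuple(g[t] for t in terms) in F[R]
    if op == "not":
        return not satisfies(M, g, phi[1])
    if op == "and":
        return satisfies(M, g, phi[1]) and satisfies(M, g, phi[2])
    if op == "forall":           # only runnable because domain is finite
        _, x, body = phi
        return all(satisfies(M, {**g, x: d}, body) for d in domain)

# A two-element structure in which "=" is interpreted as true equality:
M = ({0, 1}, {"=": {(0, 0), (1, 1)}})
print(satisfies(M, {}, ("forall", "x", ("R", "=", ("x", "x")))))  # True
</code></pre>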
[ { "answer_id": 2099585, "score": 18, "text": "<p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p>\n\n<p>Except for the proof.</p>\n\n<p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href=\"https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising\">is in fact debatable</a>. As you say, it shows that a very concrete statement \"<span class=\"math-container\">$T\\vdash \\varphi$</span>\" is equivalent to a seemingly-much-more-complicated statement \"Every model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span>\", and the complexity of the latter is suggested by the fact that, for fixed <span class=\"math-container\">$M$</span> and <span class=\"math-container\">$\\varphi$</span>, the question \"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\" is in general of very high complexity.</p>\n\n<p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are \"reasonably simple\" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class=\"math-container\">$T\\models\\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p>\n\n<hr>\n\n<p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p>\n\n<p>Let's think about the contrapositive: that if <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> is consistent, then we can build a model <span class=\"math-container\">$M$</span> of <span class=\"math-container\">$T$</span> with <span class=\"math-container\">$M\\not\\models\\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem \"Does <span class=\"math-container\">$N\\models \\psi$</span>?\" is extremely complicated, the <em>specific</em> question we're interested in (\"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\") may not be so bad.</p>\n\n<p>Here's the natural construction to look at: take the set of terms in the language of <span class=\"math-container\">$T$</span>, and \"mod out\" by <span class=\"math-container\">$(T\\cup\\{\\neg\\varphi\\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, \"<span class=\"math-container\">$1+1$</span>\" and \"<span class=\"math-container\">$(1+1)\\times 1$</span>\" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>! 
</p>\n\n<p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> itself (<span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"decide\" certain important questions; alternatively, the language of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"have enough terms\" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation \"<span class=\"math-container\">$\\vdash$</span>\". (See e.g. <a href=\"https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness\">these</a> two <a href=\"https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness\">questions</a>, respectively.)</p>\n\n<hr>\n\n<p><strong>Computability-theoretic aside: \"simple\" models.</strong> (No, I don't mean <a href=\"https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/\" rel=\"nofollow noreferrer\">that kind of simplicity</a>.) </p>\n\n<p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p>\n\n<p>It turns out that it is actually surprisingly close to computable - namely, we have:</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$T$</span> is a countable theory in a countable language. Then there is a model of <span class=\"math-container\">$T$</span> which is <strong>low</strong> relative to <span class=\"math-container\">$T$</span>.</p>\n</blockquote>\n\n<p>Here, \"low\" is a computability-theoretic property: a set <span class=\"math-container\">$X$</span> is low if the <a href=\"https://en.wikipedia.org/wiki/Halting_problem\" rel=\"nofollow noreferrer\">halting problem</a> defined <a href=\"https://en.wikipedia.org/wiki/Oracle_machine\" rel=\"nofollow noreferrer\">relative to <span class=\"math-container\">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class=\"math-container\">$A$</span></em> is defined similarly. By comparison, the question \"Does <span class=\"math-container\">$T$</span> prove <span class=\"math-container\">$\\varphi$</span>?\" is at the level of the halting problem relative to <span class=\"math-container\">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p>\n\n<hr>\n\n<p><strong>And set theory has things to say too.</strong></p>\n\n<p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href=\"https://plato.stanford.edu/entries/continuum-hypothesis/\" rel=\"nofollow noreferrer\">\"Does <span class=\"math-container\">$2^{\\aleph_0}=\\aleph_1$</span>?\"</a> will never arise when trying to tell whether every model of some theory satisfies some sentence. But this is just a result about <em>first-order</em> logic! 
For general logics, set-theoretic issues can indeed crop up.</p>\n\n<p>A great example of this is <a href=\"https://en.wikipedia.org/wiki/Second-order_logic\" rel=\"nofollow noreferrer\"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like \"There is an injective, non-surjective function,\" or more formally <span class=\"math-container\">$$\\exists F[\\forall x, y(F(x)=F(y)\\implies x=y)\\wedge \\exists z\\forall x(F(x)\\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p>\n\n<blockquote>\n <p>There is a sentence <span class=\"math-container\">$\\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p>\n</blockquote>\n\n<p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p>\n\n<p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p>\n\n<p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href=\"http://projecteuclid.org/euclid.pl/1235417263\" rel=\"nofollow noreferrer\">abstract model theory</a>!</em></p>\n" }, { "answer_id": 2099553, "score": 5, "text": "<p>Notice that the right side says \"for <em>every</em> model $\\mathcal{M}$\". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p>\n" }, { "answer_id": 2099572, "score": 4, "text": "<blockquote>\n <p>I see a problem because in some models $(I^g_F(\\tau_1),\\dots,I^g_F(\\tau_k))\\in F(R)$ may be undecidable.</p>\n</blockquote>\n\n<p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p>\n\n<blockquote>\n <p>We can end up with the set-theoretic statement $\\mathfrak{c}=\\aleph_1$. One cannot tell whether this holds or not.</p>\n</blockquote>\n\n<p>We can tell if it holds <em>for some particular model</em>. Remember that Gödel and Cohen together showed that CH holds in some models of ZFC and not others. 
Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>\n" }, { "answer_id": 2099585, "score": 18, "text": "<p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p>\n\n<p>Except for the proof.</p>\n\n<p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href=\"https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising\">is in fact debatable</a>. As you say, it shows that a very concrete statement \"<span class=\"math-container\">$T\\vdash \\varphi$</span>\" is equivalent to a seemingly-much-more-complicated statement \"Every model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span>\", and the complexity of the latter is suggested by the fact that, for fixed <span class=\"math-container\">$M$</span> and <span class=\"math-container\">$\\varphi$</span>, the question \"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\" is in general of very high complexity.</p>\n\n<p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are \"reasonably simple\" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class=\"math-container\">$T\\models\\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p>\n\n<hr>\n\n<p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p>\n\n<p>Let's think about the contrapositive: that if <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> is consistent, then we can build a model <span class=\"math-container\">$M$</span> of <span class=\"math-container\">$T$</span> with <span class=\"math-container\">$M\\not\\models\\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem \"Does <span class=\"math-container\">$N\\models \\psi$</span>?\" is extremely complicated, the <em>specific</em> question we're interested in (\"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\") may not be so bad.</p>\n\n<p>Here's the natural construction to look at: take the set of terms in the language of <span class=\"math-container\">$T$</span>, and \"mod out\" by <span class=\"math-container\">$(T\\cup\\{\\neg\\varphi\\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, \"<span class=\"math-container\">$1+1$</span>\" and \"<span class=\"math-container\">$(1+1)\\times 1$</span>\" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. 
So there's a natural structure assigned to the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>! </p>\n\n<p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> itself (<span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"decide\" certain important questions; alternatively, the language of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"have enough terms\" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation \"<span class=\"math-container\">$\\vdash$</span>\". (See e.g. <a href=\"https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness\">these</a> two <a href=\"https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness\">questions</a>, respectively.)</p>\n\n<hr>\n\n<p><strong>Computability-theoretic aside: \"simple\" models.</strong> (No, I don't mean <a href=\"https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/\" rel=\"nofollow noreferrer\">that kind of simplicity</a>.) </p>\n\n<p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p>\n\n<p>It turns out that it is actually surprisingly close to computable - namely, we have:</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$T$</span> is a countable theory in a countable language. Then there is a model of <span class=\"math-container\">$T$</span> which is <strong>low</strong> relative to <span class=\"math-container\">$T$</span>.</p>\n</blockquote>\n\n<p>Here, \"low\" is a computability-theoretic property: a set <span class=\"math-container\">$X$</span> is low if the <a href=\"https://en.wikipedia.org/wiki/Halting_problem\" rel=\"nofollow noreferrer\">halting problem</a> defined <a href=\"https://en.wikipedia.org/wiki/Oracle_machine\" rel=\"nofollow noreferrer\">relative to <span class=\"math-container\">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class=\"math-container\">$A$</span></em> is defined similarly. 
By comparison, the question \"Does <span class=\"math-container\">$T$</span> prove <span class=\"math-container\">$\\varphi$</span>?\" is at the level of the halting problem relative to <span class=\"math-container\">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p>\n\n<hr>\n\n<p><strong>And set theory has things to say too.</strong></p>\n\n<p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href=\"https://plato.stanford.edu/entries/continuum-hypothesis/\" rel=\"nofollow noreferrer\">\"Does <span class=\"math-container\">$2^{\\aleph_0}=\\aleph_1$</span>?\"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p>\n\n<p>A great example of this is <a href=\"https://en.wikipedia.org/wiki/Second-order_logic\" rel=\"nofollow noreferrer\"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like \"There is an injective, non-surjective function,\" or more formally <span class=\"math-container\">$$\\exists F[\\forall x, y(F(x)=F(y)\\implies x=y)\\wedge \\exists z\\forall x(F(x)\\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p>\n\n<blockquote>\n <p>There is a sentence <span class=\"math-container\">$\\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p>\n</blockquote>\n\n<p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p>\n\n<p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p>\n\n<p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href=\"http://projecteuclid.org/euclid.pl/1235417263\" rel=\"nofollow noreferrer\">abstract model theory</a>!</em></p>\n" }, { "answer_id": 2099553, "score": 5, "text": "<p>Notice that the right side says \"for <em>every</em> model $\\mathcal{M}$\". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p>\n" }, { "answer_id": 2099572, "score": 4, "text": "<blockquote>\n <p>I see the problem cause in some models $(I^g_F(\\tau_1),\\dots,I^g_F(\\tau_k))\\in F(R)$ may be undecidable.</p>\n</blockquote>\n\n<p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. 
Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p>\n\n<blockquote>\n <p>We can end up with set theoretical situation $\\mathfrak{c}=\\aleph_1$. One cannot tell if this hold or not.</p>\n</blockquote>\n\n<p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>\n" }, { "answer_id": 2099585, "score": 18, "text": "<p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p>\n\n<p>Except for the proof.</p>\n\n<p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href=\"https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising\">is in fact debatable</a>. As you say, it shows that a very concrete statement \"<span class=\"math-container\">$T\\vdash \\varphi$</span>\" is equivalent to a seemingly-much-more-complicated statement \"Every model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span>\", and the complexity of the latter is suggested by the fact that, for fixed <span class=\"math-container\">$M$</span> and <span class=\"math-container\">$\\varphi$</span>, the question \"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\" is in general of very high complexity.</p>\n\n<p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are \"reasonably simple\" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class=\"math-container\">$T\\models\\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p>\n\n<hr>\n\n<p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p>\n\n<p>Let's think about the contrapositive: that if <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> is consistent, then we can build a model <span class=\"math-container\">$M$</span> of <span class=\"math-container\">$T$</span> with <span class=\"math-container\">$M\\not\\models\\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem \"Does <span class=\"math-container\">$N\\models \\psi$</span>?\" is extremely complicated, the <em>specific</em> question we're interested in (\"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\") may not be so bad.</p>\n\n<p>Here's the natural construction to look at: take the set of terms in the language of <span class=\"math-container\">$T$</span>, and \"mod out\" by <span class=\"math-container\">$(T\\cup\\{\\neg\\varphi\\})$</span>-provable equivalence. E.g. 
in an appropriate theory of arithmetic, \"<span class=\"math-container\">$1+1$</span>\" and \"<span class=\"math-container\">$(1+1)\\times 1$</span>\" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>! </p>\n\n<p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> itself (<span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"decide\" certain important questions; alternatively, the language of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"have enough terms\" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation \"<span class=\"math-container\">$\\vdash$</span>\". (See e.g. <a href=\"https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness\">these</a> two <a href=\"https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness\">questions</a>, respectively.)</p>\n\n<hr>\n\n<p><strong>Computability-theoretic aside: \"simple\" models.</strong> (No, I don't mean <a href=\"https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/\" rel=\"nofollow noreferrer\">that kind of simplicity</a>.) </p>\n\n<p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p>\n\n<p>It turns out that it is actually surprisingly close to computable - namely, we have:</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$T$</span> is a countable theory in a countable language. Then there is a model of <span class=\"math-container\">$T$</span> which is <strong>low</strong> relative to <span class=\"math-container\">$T$</span>.</p>\n</blockquote>\n\n<p>Here, \"low\" is a computability-theoretic property: a set <span class=\"math-container\">$X$</span> is low if the <a href=\"https://en.wikipedia.org/wiki/Halting_problem\" rel=\"nofollow noreferrer\">halting problem</a> defined <a href=\"https://en.wikipedia.org/wiki/Oracle_machine\" rel=\"nofollow noreferrer\">relative to <span class=\"math-container\">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class=\"math-container\">$A$</span></em> is defined similarly. 
By comparison, the question \"Does <span class=\"math-container\">$T$</span> prove <span class=\"math-container\">$\\varphi$</span>?\" is at the level of the halting problem relative to <span class=\"math-container\">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p>\n\n<hr>\n\n<p><strong>And set theory has things to say too.</strong></p>\n\n<p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href=\"https://plato.stanford.edu/entries/continuum-hypothesis/\" rel=\"nofollow noreferrer\">\"Does <span class=\"math-container\">$2^{\\aleph_0}=\\aleph_1$</span>?\"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p>\n\n<p>A great example of this is <a href=\"https://en.wikipedia.org/wiki/Second-order_logic\" rel=\"nofollow noreferrer\"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like \"There is an injective, non-surjective function,\" or more formally <span class=\"math-container\">$$\\exists F[\\forall x, y(F(x)=F(y)\\implies x=y)\\wedge \\exists z\\forall x(F(x)\\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p>\n\n<blockquote>\n <p>There is a sentence <span class=\"math-container\">$\\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p>\n</blockquote>\n\n<p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p>\n\n<p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p>\n\n<p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href=\"http://projecteuclid.org/euclid.pl/1235417263\" rel=\"nofollow noreferrer\">abstract model theory</a>!</em></p>\n" }, { "answer_id": 2099553, "score": 5, "text": "<p>Notice that the right side says \"for <em>every</em> model $\\mathcal{M}$\". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p>\n" }, { "answer_id": 2099572, "score": 4, "text": "<blockquote>\n <p>I see the problem cause in some models $(I^g_F(\\tau_1),\\dots,I^g_F(\\tau_k))\\in F(R)$ may be undecidable.</p>\n</blockquote>\n\n<p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. 
Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p>\n\n<blockquote>\n <p>We can end up with set theoretical situation $\\mathfrak{c}=\\aleph_1$. One cannot tell if this hold or not.</p>\n</blockquote>\n\n<p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>\n" }, { "answer_id": 2099585, "score": 18, "text": "<p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p>\n\n<p>Except for the proof.</p>\n\n<p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href=\"https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising\">is in fact debatable</a>. As you say, it shows that a very concrete statement \"<span class=\"math-container\">$T\\vdash \\varphi$</span>\" is equivalent to a seemingly-much-more-complicated statement \"Every model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span>\", and the complexity of the latter is suggested by the fact that, for fixed <span class=\"math-container\">$M$</span> and <span class=\"math-container\">$\\varphi$</span>, the question \"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\" is in general of very high complexity.</p>\n\n<p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are \"reasonably simple\" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class=\"math-container\">$T\\models\\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p>\n\n<hr>\n\n<p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p>\n\n<p>Let's think about the contrapositive: that if <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> is consistent, then we can build a model <span class=\"math-container\">$M$</span> of <span class=\"math-container\">$T$</span> with <span class=\"math-container\">$M\\not\\models\\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem \"Does <span class=\"math-container\">$N\\models \\psi$</span>?\" is extremely complicated, the <em>specific</em> question we're interested in (\"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\") may not be so bad.</p>\n\n<p>Here's the natural construction to look at: take the set of terms in the language of <span class=\"math-container\">$T$</span>, and \"mod out\" by <span class=\"math-container\">$(T\\cup\\{\\neg\\varphi\\})$</span>-provable equivalence. E.g. 
in an appropriate theory of arithmetic, \"<span class=\"math-container\">$1+1$</span>\" and \"<span class=\"math-container\">$(1+1)\\times 1$</span>\" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>! </p>\n\n<p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> itself (<span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"decide\" certain important questions; alternatively, the language of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"have enough terms\" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation \"<span class=\"math-container\">$\\vdash$</span>\". (See e.g. <a href=\"https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness\">these</a> two <a href=\"https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness\">questions</a>, respectively.)</p>\n\n<hr>\n\n<p><strong>Computability-theoretic aside: \"simple\" models.</strong> (No, I don't mean <a href=\"https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/\" rel=\"nofollow noreferrer\">that kind of simplicity</a>.) </p>\n\n<p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p>\n\n<p>It turns out that it is actually surprisingly close to computable - namely, we have:</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$T$</span> is a countable theory in a countable language. Then there is a model of <span class=\"math-container\">$T$</span> which is <strong>low</strong> relative to <span class=\"math-container\">$T$</span>.</p>\n</blockquote>\n\n<p>Here, \"low\" is a computability-theoretic property: a set <span class=\"math-container\">$X$</span> is low if the <a href=\"https://en.wikipedia.org/wiki/Halting_problem\" rel=\"nofollow noreferrer\">halting problem</a> defined <a href=\"https://en.wikipedia.org/wiki/Oracle_machine\" rel=\"nofollow noreferrer\">relative to <span class=\"math-container\">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class=\"math-container\">$A$</span></em> is defined similarly. 
By comparison, the question \"Does <span class=\"math-container\">$T$</span> prove <span class=\"math-container\">$\\varphi$</span>?\" is at the level of the halting problem relative to <span class=\"math-container\">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p>\n\n<hr>\n\n<p><strong>And set theory has things to say too.</strong></p>\n\n<p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href=\"https://plato.stanford.edu/entries/continuum-hypothesis/\" rel=\"nofollow noreferrer\">\"Does <span class=\"math-container\">$2^{\\aleph_0}=\\aleph_1$</span>?\"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p>\n\n<p>A great example of this is <a href=\"https://en.wikipedia.org/wiki/Second-order_logic\" rel=\"nofollow noreferrer\"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like \"There is an injective, non-surjective function,\" or more formally <span class=\"math-container\">$$\\exists F[\\forall x, y(F(x)=F(y)\\implies x=y)\\wedge \\exists z\\forall x(F(x)\\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p>\n\n<blockquote>\n <p>There is a sentence <span class=\"math-container\">$\\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p>\n</blockquote>\n\n<p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p>\n\n<p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p>\n\n<p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href=\"http://projecteuclid.org/euclid.pl/1235417263\" rel=\"nofollow noreferrer\">abstract model theory</a>!</em></p>\n" }, { "answer_id": 2099553, "score": 5, "text": "<p>Notice that the right side says \"for <em>every</em> model $\\mathcal{M}$\". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p>\n" }, { "answer_id": 2099572, "score": 4, "text": "<blockquote>\n <p>I see the problem cause in some models $(I^g_F(\\tau_1),\\dots,I^g_F(\\tau_k))\\in F(R)$ may be undecidable.</p>\n</blockquote>\n\n<p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. 
Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p>\n\n<blockquote>\n <p>We can end up with set theoretical situation $\\mathfrak{c}=\\aleph_1$. One cannot tell if this hold or not.</p>\n</blockquote>\n\n<p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>\n" }, { "answer_id": 2099585, "score": 18, "text": "<p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p>\n\n<p>Except for the proof.</p>\n\n<p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href=\"https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising\">is in fact debatable</a>. As you say, it shows that a very concrete statement \"<span class=\"math-container\">$T\\vdash \\varphi$</span>\" is equivalent to a seemingly-much-more-complicated statement \"Every model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span>\", and the complexity of the latter is suggested by the fact that, for fixed <span class=\"math-container\">$M$</span> and <span class=\"math-container\">$\\varphi$</span>, the question \"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\" is in general of very high complexity.</p>\n\n<p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are \"reasonably simple\" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class=\"math-container\">$T\\models\\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p>\n\n<hr>\n\n<p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p>\n\n<p>Let's think about the contrapositive: that if <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> is consistent, then we can build a model <span class=\"math-container\">$M$</span> of <span class=\"math-container\">$T$</span> with <span class=\"math-container\">$M\\not\\models\\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem \"Does <span class=\"math-container\">$N\\models \\psi$</span>?\" is extremely complicated, the <em>specific</em> question we're interested in (\"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\") may not be so bad.</p>\n\n<p>Here's the natural construction to look at: take the set of terms in the language of <span class=\"math-container\">$T$</span>, and \"mod out\" by <span class=\"math-container\">$(T\\cup\\{\\neg\\varphi\\})$</span>-provable equivalence. E.g. 
in an appropriate theory of arithmetic, \"<span class=\"math-container\">$1+1$</span>\" and \"<span class=\"math-container\">$(1+1)\\times 1$</span>\" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>! </p>\n\n<p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> itself (<span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"decide\" certain important questions; alternatively, the language of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"have enough terms\" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation \"<span class=\"math-container\">$\\vdash$</span>\". (See e.g. <a href=\"https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness\">these</a> two <a href=\"https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness\">questions</a>, respectively.)</p>\n\n<hr>\n\n<p><strong>Computability-theoretic aside: \"simple\" models.</strong> (No, I don't mean <a href=\"https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/\" rel=\"nofollow noreferrer\">that kind of simplicity</a>.) </p>\n\n<p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p>\n\n<p>It turns out that it is actually surprisingly close to computable - namely, we have:</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$T$</span> is a countable theory in a countable language. Then there is a model of <span class=\"math-container\">$T$</span> which is <strong>low</strong> relative to <span class=\"math-container\">$T$</span>.</p>\n</blockquote>\n\n<p>Here, \"low\" is a computability-theoretic property: a set <span class=\"math-container\">$X$</span> is low if the <a href=\"https://en.wikipedia.org/wiki/Halting_problem\" rel=\"nofollow noreferrer\">halting problem</a> defined <a href=\"https://en.wikipedia.org/wiki/Oracle_machine\" rel=\"nofollow noreferrer\">relative to <span class=\"math-container\">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class=\"math-container\">$A$</span></em> is defined similarly. 
By comparison, the question \"Does <span class=\"math-container\">$T$</span> prove <span class=\"math-container\">$\varphi$</span>?\" is at the level of the halting problem relative to <span class=\"math-container\">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p>\n\n<hr>\n\n<p><strong>And set theory has things to say too.</strong></p>\n\n<p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href=\"https://plato.stanford.edu/entries/continuum-hypothesis/\" rel=\"nofollow noreferrer\">\"Does <span class=\"math-container\">$2^{\aleph_0}=\aleph_1$</span>?\"</a> will never arise when trying to tell whether every model of some theory satisfies some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p>\n\n<p>A great example of this is <a href=\"https://en.wikipedia.org/wiki/Second-order_logic\" rel=\"nofollow noreferrer\"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like \"There is an injective, non-surjective function,\" or more formally <span class=\"math-container\">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p>\n\n<blockquote>\n <p>There is a sentence <span class=\"math-container\">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p>\n</blockquote>\n\n<p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p>\n\n<p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p>\n\n<p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href=\"http://projecteuclid.org/euclid.pl/1235417263\" rel=\"nofollow noreferrer\">abstract model theory</a>!</em></p>\n" }, { "answer_id": 2099553, "score": 5, "text": "<p>Notice that the right side says \"for <em>every</em> model $\mathcal{M}$\". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p>\n" }, { "answer_id": 2099572, "score": 4, "text": "<blockquote>\n <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p>\n</blockquote>\n\n<p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. 
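(A standard toy example: the commutativity sentence <span class=\"math-container\">$\forall x\,\forall y\,(x\cdot y=y\cdot x)$</span> is true in the model <span class=\"math-container\">$(\mathbb{Z},+)$</span> of the group axioms and false in the symmetric group <span class=\"math-container\">$S_3$</span>; each of these models settles the sentence outright, even though the group axioms neither prove nor refute it.) 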
Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p>\n\n<blockquote>\n <p>We can end up with set theoretical situation $\\mathfrak{c}=\\aleph_1$. One cannot tell if this hold or not.</p>\n</blockquote>\n\n<p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>\n" }, { "answer_id": 2099585, "score": 18, "text": "<p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p>\n\n<p>Except for the proof.</p>\n\n<p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href=\"https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising\">is in fact debatable</a>. As you say, it shows that a very concrete statement \"<span class=\"math-container\">$T\\vdash \\varphi$</span>\" is equivalent to a seemingly-much-more-complicated statement \"Every model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span>\", and the complexity of the latter is suggested by the fact that, for fixed <span class=\"math-container\">$M$</span> and <span class=\"math-container\">$\\varphi$</span>, the question \"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\" is in general of very high complexity.</p>\n\n<p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are \"reasonably simple\" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class=\"math-container\">$T\\models\\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p>\n\n<hr>\n\n<p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p>\n\n<p>Let's think about the contrapositive: that if <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> is consistent, then we can build a model <span class=\"math-container\">$M$</span> of <span class=\"math-container\">$T$</span> with <span class=\"math-container\">$M\\not\\models\\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem \"Does <span class=\"math-container\">$N\\models \\psi$</span>?\" is extremely complicated, the <em>specific</em> question we're interested in (\"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\") may not be so bad.</p>\n\n<p>Here's the natural construction to look at: take the set of terms in the language of <span class=\"math-container\">$T$</span>, and \"mod out\" by <span class=\"math-container\">$(T\\cup\\{\\neg\\varphi\\})$</span>-provable equivalence. E.g. 
in an appropriate theory of arithmetic, \"<span class=\"math-container\">$1+1$</span>\" and \"<span class=\"math-container\">$(1+1)\\times 1$</span>\" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>! </p>\n\n<p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> itself (<span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"decide\" certain important questions; alternatively, the language of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"have enough terms\" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation \"<span class=\"math-container\">$\\vdash$</span>\". (See e.g. <a href=\"https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness\">these</a> two <a href=\"https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness\">questions</a>, respectively.)</p>\n\n<hr>\n\n<p><strong>Computability-theoretic aside: \"simple\" models.</strong> (No, I don't mean <a href=\"https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/\" rel=\"nofollow noreferrer\">that kind of simplicity</a>.) </p>\n\n<p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p>\n\n<p>It turns out that it is actually surprisingly close to computable - namely, we have:</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$T$</span> is a countable theory in a countable language. Then there is a model of <span class=\"math-container\">$T$</span> which is <strong>low</strong> relative to <span class=\"math-container\">$T$</span>.</p>\n</blockquote>\n\n<p>Here, \"low\" is a computability-theoretic property: a set <span class=\"math-container\">$X$</span> is low if the <a href=\"https://en.wikipedia.org/wiki/Halting_problem\" rel=\"nofollow noreferrer\">halting problem</a> defined <a href=\"https://en.wikipedia.org/wiki/Oracle_machine\" rel=\"nofollow noreferrer\">relative to <span class=\"math-container\">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class=\"math-container\">$A$</span></em> is defined similarly. 
By comparison, the question \"Does <span class=\"math-container\">$T$</span> prove <span class=\"math-container\">$\\varphi$</span>?\" is at the level of the halting problem relative to <span class=\"math-container\">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p>\n\n<hr>\n\n<p><strong>And set theory has things to say too.</strong></p>\n\n<p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href=\"https://plato.stanford.edu/entries/continuum-hypothesis/\" rel=\"nofollow noreferrer\">\"Does <span class=\"math-container\">$2^{\\aleph_0}=\\aleph_1$</span>?\"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p>\n\n<p>A great example of this is <a href=\"https://en.wikipedia.org/wiki/Second-order_logic\" rel=\"nofollow noreferrer\"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like \"There is an injective, non-surjective function,\" or more formally <span class=\"math-container\">$$\\exists F[\\forall x, y(F(x)=F(y)\\implies x=y)\\wedge \\exists z\\forall x(F(x)\\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p>\n\n<blockquote>\n <p>There is a sentence <span class=\"math-container\">$\\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p>\n</blockquote>\n\n<p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p>\n\n<p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p>\n\n<p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href=\"http://projecteuclid.org/euclid.pl/1235417263\" rel=\"nofollow noreferrer\">abstract model theory</a>!</em></p>\n" }, { "answer_id": 2099553, "score": 5, "text": "<p>Notice that the right side says \"for <em>every</em> model $\\mathcal{M}$\". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p>\n" }, { "answer_id": 2099572, "score": 4, "text": "<blockquote>\n <p>I see the problem cause in some models $(I^g_F(\\tau_1),\\dots,I^g_F(\\tau_k))\\in F(R)$ may be undecidable.</p>\n</blockquote>\n\n<p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. 
Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p>\n\n<blockquote>\n <p>We can end up with set theoretical situation $\\mathfrak{c}=\\aleph_1$. One cannot tell if this hold or not.</p>\n</blockquote>\n\n<p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>\n" }, { "answer_id": 2099585, "score": 18, "text": "<p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p>\n\n<p>Except for the proof.</p>\n\n<p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href=\"https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising\">is in fact debatable</a>. As you say, it shows that a very concrete statement \"<span class=\"math-container\">$T\\vdash \\varphi$</span>\" is equivalent to a seemingly-much-more-complicated statement \"Every model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span>\", and the complexity of the latter is suggested by the fact that, for fixed <span class=\"math-container\">$M$</span> and <span class=\"math-container\">$\\varphi$</span>, the question \"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\" is in general of very high complexity.</p>\n\n<p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are \"reasonably simple\" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class=\"math-container\">$T\\models\\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p>\n\n<hr>\n\n<p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p>\n\n<p>Let's think about the contrapositive: that if <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> is consistent, then we can build a model <span class=\"math-container\">$M$</span> of <span class=\"math-container\">$T$</span> with <span class=\"math-container\">$M\\not\\models\\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem \"Does <span class=\"math-container\">$N\\models \\psi$</span>?\" is extremely complicated, the <em>specific</em> question we're interested in (\"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\") may not be so bad.</p>\n\n<p>Here's the natural construction to look at: take the set of terms in the language of <span class=\"math-container\">$T$</span>, and \"mod out\" by <span class=\"math-container\">$(T\\cup\\{\\neg\\varphi\\})$</span>-provable equivalence. E.g. 
in an appropriate theory of arithmetic, \"<span class=\"math-container\">$1+1$</span>\" and \"<span class=\"math-container\">$(1+1)\\times 1$</span>\" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>! </p>\n\n<p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> itself (<span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"decide\" certain important questions; alternatively, the language of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"have enough terms\" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation \"<span class=\"math-container\">$\\vdash$</span>\". (See e.g. <a href=\"https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness\">these</a> two <a href=\"https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness\">questions</a>, respectively.)</p>\n\n<hr>\n\n<p><strong>Computability-theoretic aside: \"simple\" models.</strong> (No, I don't mean <a href=\"https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/\" rel=\"nofollow noreferrer\">that kind of simplicity</a>.) </p>\n\n<p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p>\n\n<p>It turns out that it is actually surprisingly close to computable - namely, we have:</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$T$</span> is a countable theory in a countable language. Then there is a model of <span class=\"math-container\">$T$</span> which is <strong>low</strong> relative to <span class=\"math-container\">$T$</span>.</p>\n</blockquote>\n\n<p>Here, \"low\" is a computability-theoretic property: a set <span class=\"math-container\">$X$</span> is low if the <a href=\"https://en.wikipedia.org/wiki/Halting_problem\" rel=\"nofollow noreferrer\">halting problem</a> defined <a href=\"https://en.wikipedia.org/wiki/Oracle_machine\" rel=\"nofollow noreferrer\">relative to <span class=\"math-container\">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class=\"math-container\">$A$</span></em> is defined similarly. 
By comparison, the question \"Does <span class=\"math-container\">$T$</span> prove <span class=\"math-container\">$\\varphi$</span>?\" is at the level of the halting problem relative to <span class=\"math-container\">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p>\n\n<hr>\n\n<p><strong>And set theory has things to say too.</strong></p>\n\n<p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href=\"https://plato.stanford.edu/entries/continuum-hypothesis/\" rel=\"nofollow noreferrer\">\"Does <span class=\"math-container\">$2^{\\aleph_0}=\\aleph_1$</span>?\"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p>\n\n<p>A great example of this is <a href=\"https://en.wikipedia.org/wiki/Second-order_logic\" rel=\"nofollow noreferrer\"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like \"There is an injective, non-surjective function,\" or more formally <span class=\"math-container\">$$\\exists F[\\forall x, y(F(x)=F(y)\\implies x=y)\\wedge \\exists z\\forall x(F(x)\\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p>\n\n<blockquote>\n <p>There is a sentence <span class=\"math-container\">$\\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p>\n</blockquote>\n\n<p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p>\n\n<p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p>\n\n<p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href=\"http://projecteuclid.org/euclid.pl/1235417263\" rel=\"nofollow noreferrer\">abstract model theory</a>!</em></p>\n" }, { "answer_id": 2099553, "score": 5, "text": "<p>Notice that the right side says \"for <em>every</em> model $\\mathcal{M}$\". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p>\n" }, { "answer_id": 2099572, "score": 4, "text": "<blockquote>\n <p>I see the problem cause in some models $(I^g_F(\\tau_1),\\dots,I^g_F(\\tau_k))\\in F(R)$ may be undecidable.</p>\n</blockquote>\n\n<p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. 
Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p>\n\n<blockquote>\n <p>We can end up with set theoretical situation $\\mathfrak{c}=\\aleph_1$. One cannot tell if this hold or not.</p>\n</blockquote>\n\n<p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>\n" }, { "answer_id": 2099585, "score": 18, "text": "<p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p>\n\n<p>Except for the proof.</p>\n\n<p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href=\"https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising\">is in fact debatable</a>. As you say, it shows that a very concrete statement \"<span class=\"math-container\">$T\\vdash \\varphi$</span>\" is equivalent to a seemingly-much-more-complicated statement \"Every model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span>\", and the complexity of the latter is suggested by the fact that, for fixed <span class=\"math-container\">$M$</span> and <span class=\"math-container\">$\\varphi$</span>, the question \"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\" is in general of very high complexity.</p>\n\n<p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are \"reasonably simple\" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class=\"math-container\">$T\\models\\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p>\n\n<hr>\n\n<p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p>\n\n<p>Let's think about the contrapositive: that if <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> is consistent, then we can build a model <span class=\"math-container\">$M$</span> of <span class=\"math-container\">$T$</span> with <span class=\"math-container\">$M\\not\\models\\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem \"Does <span class=\"math-container\">$N\\models \\psi$</span>?\" is extremely complicated, the <em>specific</em> question we're interested in (\"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\") may not be so bad.</p>\n\n<p>Here's the natural construction to look at: take the set of terms in the language of <span class=\"math-container\">$T$</span>, and \"mod out\" by <span class=\"math-container\">$(T\\cup\\{\\neg\\varphi\\})$</span>-provable equivalence. E.g. 
in an appropriate theory of arithmetic, \"<span class=\"math-container\">$1+1$</span>\" and \"<span class=\"math-container\">$(1+1)\\times 1$</span>\" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>! </p>\n\n<p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> itself (<span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"decide\" certain important questions; alternatively, the language of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"have enough terms\" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation \"<span class=\"math-container\">$\\vdash$</span>\". (See e.g. <a href=\"https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness\">these</a> two <a href=\"https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness\">questions</a>, respectively.)</p>\n\n<hr>\n\n<p><strong>Computability-theoretic aside: \"simple\" models.</strong> (No, I don't mean <a href=\"https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/\" rel=\"nofollow noreferrer\">that kind of simplicity</a>.) </p>\n\n<p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p>\n\n<p>It turns out that it is actually surprisingly close to computable - namely, we have:</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$T$</span> is a countable theory in a countable language. Then there is a model of <span class=\"math-container\">$T$</span> which is <strong>low</strong> relative to <span class=\"math-container\">$T$</span>.</p>\n</blockquote>\n\n<p>Here, \"low\" is a computability-theoretic property: a set <span class=\"math-container\">$X$</span> is low if the <a href=\"https://en.wikipedia.org/wiki/Halting_problem\" rel=\"nofollow noreferrer\">halting problem</a> defined <a href=\"https://en.wikipedia.org/wiki/Oracle_machine\" rel=\"nofollow noreferrer\">relative to <span class=\"math-container\">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class=\"math-container\">$A$</span></em> is defined similarly. 
By comparison, the question \"Does <span class=\"math-container\">$T$</span> prove <span class=\"math-container\">$\\varphi$</span>?\" is at the level of the halting problem relative to <span class=\"math-container\">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p>\n\n<hr>\n\n<p><strong>And set theory has things to say too.</strong></p>\n\n<p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href=\"https://plato.stanford.edu/entries/continuum-hypothesis/\" rel=\"nofollow noreferrer\">\"Does <span class=\"math-container\">$2^{\\aleph_0}=\\aleph_1$</span>?\"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p>\n\n<p>A great example of this is <a href=\"https://en.wikipedia.org/wiki/Second-order_logic\" rel=\"nofollow noreferrer\"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like \"There is an injective, non-surjective function,\" or more formally <span class=\"math-container\">$$\\exists F[\\forall x, y(F(x)=F(y)\\implies x=y)\\wedge \\exists z\\forall x(F(x)\\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p>\n\n<blockquote>\n <p>There is a sentence <span class=\"math-container\">$\\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p>\n</blockquote>\n\n<p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p>\n\n<p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p>\n\n<p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href=\"http://projecteuclid.org/euclid.pl/1235417263\" rel=\"nofollow noreferrer\">abstract model theory</a>!</em></p>\n" }, { "answer_id": 2099553, "score": 5, "text": "<p>Notice that the right side says \"for <em>every</em> model $\\mathcal{M}$\". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p>\n" }, { "answer_id": 2099572, "score": 4, "text": "<blockquote>\n <p>I see the problem cause in some models $(I^g_F(\\tau_1),\\dots,I^g_F(\\tau_k))\\in F(R)$ may be undecidable.</p>\n</blockquote>\n\n<p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. 
Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p>\n\n<blockquote>\n <p>We can end up with set theoretical situation $\\mathfrak{c}=\\aleph_1$. One cannot tell if this hold or not.</p>\n</blockquote>\n\n<p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>\n" }, { "answer_id": 2099585, "score": 18, "text": "<p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p>\n\n<p>Except for the proof.</p>\n\n<p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href=\"https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising\">is in fact debatable</a>. As you say, it shows that a very concrete statement \"<span class=\"math-container\">$T\\vdash \\varphi$</span>\" is equivalent to a seemingly-much-more-complicated statement \"Every model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span>\", and the complexity of the latter is suggested by the fact that, for fixed <span class=\"math-container\">$M$</span> and <span class=\"math-container\">$\\varphi$</span>, the question \"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\" is in general of very high complexity.</p>\n\n<p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are \"reasonably simple\" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class=\"math-container\">$T\\models\\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p>\n\n<hr>\n\n<p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p>\n\n<p>Let's think about the contrapositive: that if <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> is consistent, then we can build a model <span class=\"math-container\">$M$</span> of <span class=\"math-container\">$T$</span> with <span class=\"math-container\">$M\\not\\models\\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem \"Does <span class=\"math-container\">$N\\models \\psi$</span>?\" is extremely complicated, the <em>specific</em> question we're interested in (\"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\") may not be so bad.</p>\n\n<p>Here's the natural construction to look at: take the set of terms in the language of <span class=\"math-container\">$T$</span>, and \"mod out\" by <span class=\"math-container\">$(T\\cup\\{\\neg\\varphi\\})$</span>-provable equivalence. E.g. 
in an appropriate theory of arithmetic, \"<span class=\"math-container\">$1+1$</span>\" and \"<span class=\"math-container\">$(1+1)\\times 1$</span>\" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>! </p>\n\n<p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> itself (<span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"decide\" certain important questions; alternatively, the language of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"have enough terms\" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation \"<span class=\"math-container\">$\\vdash$</span>\". (See e.g. <a href=\"https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness\">these</a> two <a href=\"https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness\">questions</a>, respectively.)</p>\n\n<hr>\n\n<p><strong>Computability-theoretic aside: \"simple\" models.</strong> (No, I don't mean <a href=\"https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/\" rel=\"nofollow noreferrer\">that kind of simplicity</a>.) </p>\n\n<p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p>\n\n<p>It turns out that it is actually surprisingly close to computable - namely, we have:</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$T$</span> is a countable theory in a countable language. Then there is a model of <span class=\"math-container\">$T$</span> which is <strong>low</strong> relative to <span class=\"math-container\">$T$</span>.</p>\n</blockquote>\n\n<p>Here, \"low\" is a computability-theoretic property: a set <span class=\"math-container\">$X$</span> is low if the <a href=\"https://en.wikipedia.org/wiki/Halting_problem\" rel=\"nofollow noreferrer\">halting problem</a> defined <a href=\"https://en.wikipedia.org/wiki/Oracle_machine\" rel=\"nofollow noreferrer\">relative to <span class=\"math-container\">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class=\"math-container\">$A$</span></em> is defined similarly. 
By comparison, the question \"Does <span class=\"math-container\">$T$</span> prove <span class=\"math-container\">$\\varphi$</span>?\" is at the level of the halting problem relative to <span class=\"math-container\">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p>\n\n<hr>\n\n<p><strong>And set theory has things to say too.</strong></p>\n\n<p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href=\"https://plato.stanford.edu/entries/continuum-hypothesis/\" rel=\"nofollow noreferrer\">\"Does <span class=\"math-container\">$2^{\\aleph_0}=\\aleph_1$</span>?\"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p>\n\n<p>A great example of this is <a href=\"https://en.wikipedia.org/wiki/Second-order_logic\" rel=\"nofollow noreferrer\"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like \"There is an injective, non-surjective function,\" or more formally <span class=\"math-container\">$$\\exists F[\\forall x, y(F(x)=F(y)\\implies x=y)\\wedge \\exists z\\forall x(F(x)\\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p>\n\n<blockquote>\n <p>There is a sentence <span class=\"math-container\">$\\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p>\n</blockquote>\n\n<p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p>\n\n<p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p>\n\n<p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href=\"http://projecteuclid.org/euclid.pl/1235417263\" rel=\"nofollow noreferrer\">abstract model theory</a>!</em></p>\n" }, { "answer_id": 2099553, "score": 5, "text": "<p>Notice that the right side says \"for <em>every</em> model $\\mathcal{M}$\". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p>\n" }, { "answer_id": 2099572, "score": 4, "text": "<blockquote>\n <p>I see the problem cause in some models $(I^g_F(\\tau_1),\\dots,I^g_F(\\tau_k))\\in F(R)$ may be undecidable.</p>\n</blockquote>\n\n<p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. 
Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p>\n\n<blockquote>\n <p>We can end up with set theoretical situation $\\mathfrak{c}=\\aleph_1$. One cannot tell if this hold or not.</p>\n</blockquote>\n\n<p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>\n" }, { "answer_id": 2099585, "score": 18, "text": "<p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p>\n\n<p>Except for the proof.</p>\n\n<p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href=\"https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising\">is in fact debatable</a>. As you say, it shows that a very concrete statement \"<span class=\"math-container\">$T\\vdash \\varphi$</span>\" is equivalent to a seemingly-much-more-complicated statement \"Every model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span>\", and the complexity of the latter is suggested by the fact that, for fixed <span class=\"math-container\">$M$</span> and <span class=\"math-container\">$\\varphi$</span>, the question \"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\" is in general of very high complexity.</p>\n\n<p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class=\"math-container\">$T$</span> satisfies <span class=\"math-container\">$\\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are \"reasonably simple\" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class=\"math-container\">$T\\models\\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p>\n\n<hr>\n\n<p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p>\n\n<p>Let's think about the contrapositive: that if <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> is consistent, then we can build a model <span class=\"math-container\">$M$</span> of <span class=\"math-container\">$T$</span> with <span class=\"math-container\">$M\\not\\models\\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem \"Does <span class=\"math-container\">$N\\models \\psi$</span>?\" is extremely complicated, the <em>specific</em> question we're interested in (\"Does <span class=\"math-container\">$M\\models\\varphi$</span>?\") may not be so bad.</p>\n\n<p>Here's the natural construction to look at: take the set of terms in the language of <span class=\"math-container\">$T$</span>, and \"mod out\" by <span class=\"math-container\">$(T\\cup\\{\\neg\\varphi\\})$</span>-provable equivalence. E.g. 
in an appropriate theory of arithmetic, \"<span class=\"math-container\">$1+1$</span>\" and \"<span class=\"math-container\">$(1+1)\\times 1$</span>\" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span>! </p>\n\n<p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> itself (<span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"decide\" certain important questions; alternatively, the language of <span class=\"math-container\">$T\\cup\\{\\neg\\varphi\\}$</span> might not \"have enough terms\" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation \"<span class=\"math-container\">$\\vdash$</span>\". (See e.g. <a href=\"https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness\">these</a> two <a href=\"https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness\">questions</a>, respectively.)</p>\n\n<hr>\n\n<p><strong>Computability-theoretic aside: \"simple\" models.</strong> (No, I don't mean <a href=\"https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/\" rel=\"nofollow noreferrer\">that kind of simplicity</a>.) </p>\n\n<p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p>\n\n<p>It turns out that it is actually surprisingly close to computable - namely, we have:</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$T$</span> is a countable theory in a countable language. Then there is a model of <span class=\"math-container\">$T$</span> which is <strong>low</strong> relative to <span class=\"math-container\">$T$</span>.</p>\n</blockquote>\n\n<p>Here, \"low\" is a computability-theoretic property: a set <span class=\"math-container\">$X$</span> is low if the <a href=\"https://en.wikipedia.org/wiki/Halting_problem\" rel=\"nofollow noreferrer\">halting problem</a> defined <a href=\"https://en.wikipedia.org/wiki/Oracle_machine\" rel=\"nofollow noreferrer\">relative to <span class=\"math-container\">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class=\"math-container\">$A$</span></em> is defined similarly. 
By comparison, the question \"Does <span class=\"math-container\">$T$</span> prove <span class=\"math-container\">$\\varphi$</span>?\" is at the level of the halting problem relative to <span class=\"math-container\">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p>\n\n<hr>\n\n<p><strong>And set theory has things to say too.</strong></p>\n\n<p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href=\"https://plato.stanford.edu/entries/continuum-hypothesis/\" rel=\"nofollow noreferrer\">\"Does <span class=\"math-container\">$2^{\\aleph_0}=\\aleph_1$</span>?\"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p>\n\n<p>A great example of this is <a href=\"https://en.wikipedia.org/wiki/Second-order_logic\" rel=\"nofollow noreferrer\"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like \"There is an injective, non-surjective function,\" or more formally <span class=\"math-container\">$$\\exists F[\\forall x, y(F(x)=F(y)\\implies x=y)\\wedge \\exists z\\forall x(F(x)\\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p>\n\n<blockquote>\n <p>There is a sentence <span class=\"math-container\">$\\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p>\n</blockquote>\n\n<p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p>\n\n<p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p>\n\n<p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href=\"http://projecteuclid.org/euclid.pl/1235417263\" rel=\"nofollow noreferrer\">abstract model theory</a>!</em></p>\n" }, { "answer_id": 2099553, "score": 5, "text": "<p>Notice that the right side says \"for <em>every</em> model $\\mathcal{M}$\". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p>\n" }, { "answer_id": 2099572, "score": 4, "text": "<blockquote>\n <p>I see the problem cause in some models $(I^g_F(\\tau_1),\\dots,I^g_F(\\tau_k))\\in F(R)$ may be undecidable.</p>\n</blockquote>\n\n<p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. 
Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p>\n\n<blockquote>\n <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p>\n</blockquote>\n\n<p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>\n" }
]
Question: How to verify satisfiability in a model? (Confusions with Gödel's Completeness Theorem) Answer 1: <p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p> <p>Except for the proof.</p> <p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href="https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising">is in fact debatable</a>). As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" is in general of very high complexity.</p> <p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are "reasonably simple" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class="math-container">$T\models\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p> <hr> <p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p> <p>Let's think about the contrapositive: that if <span class="math-container">$T\cup\{\neg\varphi\}$</span> is consistent, then we can build a model <span class="math-container">$M$</span> of <span class="math-container">$T$</span> with <span class="math-container">$M\not\models\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem "Does <span class="math-container">$N\models \psi$</span>?" is extremely complicated, the <em>specific</em> question we're interested in ("Does <span class="math-container">$M\models\varphi$</span>?") may not be so bad.</p> <p>Here's the natural construction to look at: take the set of terms in the language of <span class="math-container">$T$</span>, and "mod out" by <span class="math-container">$(T\cup\{\neg\varphi\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, "<span class="math-container">$1+1$</span>" and "<span class="math-container">$(1+1)\times 1$</span>" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class="math-container">$T\cup\{\neg\varphi\}$</span>! </p>
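<p>To make the intended object concrete, here is a minimal sketch of that term structure (assuming, for simplicity, that the language has equality and at least one closed term, and writing <span class="math-container">$[t]$</span> for the equivalence class of a closed term <span class="math-container">$t$</span>):</p> <span class="math-container">$$M=\{[t]\mid t\text{ a closed term}\},\qquad [s]=[t]\iff T\cup\{\neg\varphi\}\vdash s=t,$$</span> <span class="math-container">$$f^M([t_1],\dots,[t_n])=[f(t_1,\dots,t_n)],\qquad ([t_1],\dots,[t_n])\in R^M\iff T\cup\{\neg\varphi\}\vdash R(t_1,\dots,t_n).$$</span> <p>Notice how truth in <span class="math-container">$M$</span> is read off directly from provability; that is what ties the two sides of the theorem to each other.</p>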
<p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span> itself (<span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "decide" certain important questions; alternatively, the language of <span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "have enough terms" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation "<span class="math-container">$\vdash$</span>". (See e.g. <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p>
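<p>In jump notation the comparison is easy to state (a sketch; here <span class="math-container">$X'$</span> denotes the halting problem relative to <span class="math-container">$X$</span>): the theorem gives some <span class="math-container">$M\models T$</span> with</p> <span class="math-container">$$(M\oplus T)'\equiv_T T',\qquad\text{whereas}\qquad \{\varphi\mid T\vdash\varphi\}\ \text{is }\Sigma^0_1(T),\ \text{i.e. sits at the level of }T'.$$</span>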
<hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of some theory satisfies some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 2: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 3: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>
As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" is in general of very high complexity.</p> <p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are "reasonably simple" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class="math-container">$T\models\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p> <hr> <p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p> <p>Let's think about the contrapositive: that if <span class="math-container">$T\cup\{\neg\varphi\}$</span> is consistent, then we can build a model <span class="math-container">$M$</span> of <span class="math-container">$T$</span> with <span class="math-container">$M\not\models\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem "Does <span class="math-container">$N\models \psi$</span>?" is extremely complicated, the <em>specific</em> question we're interested in ("Does <span class="math-container">$M\models\varphi$</span>?") may not be so bad.</p> <p>Here's the natural construction to look at: take the set of terms in the language of <span class="math-container">$T$</span>, and "mod out" by <span class="math-container">$(T\cup\{\neg\varphi\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, "<span class="math-container">$1+1$</span>" and "<span class="math-container">$(1+1)\times 1$</span>" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class="math-container">$T\cup\{\neg\varphi\}$</span>! </p> <p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. 
Some of this work involves improving the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span> itself (<span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "decide" certain important questions; alternatively, the language of <span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "have enough terms" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation "<span class="math-container">$\vdash$</span>". (See e.g. <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). 
Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 5: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 6: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p> Answer 7: <p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p> <p>Except for the proof.</p> <p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href="https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising">is in fact debatable</a>. As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" 
<hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of some theory satisfies some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic.</p>
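<p>To spell out why (a standard argument, added here for completeness): let <span class="math-container">$\lambda$</span> be the sentence above and, for each <span class="math-container">$n$</span>, let <span class="math-container">$\sigma_n$</span> be the first-order sentence "there are at least <span class="math-container">$n$</span> elements". Every finite subset of <span class="math-container">$$\{\neg\lambda\}\cup\{\sigma_n : n\in\mathbb{N}\}$$</span> has a model (any sufficiently large finite structure), but the whole set has none: the <span class="math-container">$\sigma_n$</span> force the domain to be infinite, and then <span class="math-container">$\lambda$</span> holds, contradicting <span class="math-container">$\neg\lambda$</span>.</p>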
<a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. 
Answer 5: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 6: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always be either true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Gödel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>
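<p>To put that last step in symbols (a routine packaging via the soundness half of the soundness/completeness pair, not part of the original answer): <span class="math-container">$$\mathcal{M}\models ZFC+CH \;\Longrightarrow\; ZFC\nvdash\neg CH, \qquad \mathcal{N}\models ZFC+\neg CH \;\Longrightarrow\; ZFC\nvdash CH,$$</span> and the completeness theorem guarantees that the consistency of each theory is enough to produce such models in the first place.</p>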
Specific models can be quite complicated; however, every theory will <em>also</em> have models which are "reasonably simple" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class="math-container">$T\models\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p> <hr> <p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p> <p>Let's think about the contrapositive: that if <span class="math-container">$T\cup\{\neg\varphi\}$</span> is consistent, then we can build a model <span class="math-container">$M$</span> of <span class="math-container">$T$</span> with <span class="math-container">$M\not\models\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem "Does <span class="math-container">$N\models \psi$</span>?" is extremely complicated, the <em>specific</em> question we're interested in ("Does <span class="math-container">$M\models\varphi$</span>?") may not be so bad.</p> <p>Here's the natural construction to look at: take the set of terms in the language of <span class="math-container">$T$</span>, and "mod out" by <span class="math-container">$(T\cup\{\neg\varphi\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, "<span class="math-container">$1+1$</span>" and "<span class="math-container">$(1+1)\times 1$</span>" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class="math-container">$T\cup\{\neg\varphi\}$</span>! </p> <p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span> itself (<span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "decide" certain important questions; alternatively, the language of <span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "have enough terms" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation "<span class="math-container">$\vdash$</span>". (See e.g. <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. 
It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. 
And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 11: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 12: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p> Answer 13: <p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p> <p>Except for the proof.</p> <p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href="https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising">is in fact debatable</a>. As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" is in general of very high complexity.</p> <p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are "reasonably simple" (see below for more on this). 
These models will often be unnatural, but they exist, and they bear on the question of whether <span class="math-container">$T\models\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p> <hr> <p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p> <p>Let's think about the contrapositive: that if <span class="math-container">$T\cup\{\neg\varphi\}$</span> is consistent, then we can build a model <span class="math-container">$M$</span> of <span class="math-container">$T$</span> with <span class="math-container">$M\not\models\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem "Does <span class="math-container">$N\models \psi$</span>?" is extremely complicated, the <em>specific</em> question we're interested in ("Does <span class="math-container">$M\models\varphi$</span>?") may not be so bad.</p> <p>Here's the natural construction to look at: take the set of terms in the language of <span class="math-container">$T$</span>, and "mod out" by <span class="math-container">$(T\cup\{\neg\varphi\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, "<span class="math-container">$1+1$</span>" and "<span class="math-container">$(1+1)\times 1$</span>" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class="math-container">$T\cup\{\neg\varphi\}$</span>! </p> <p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span> itself (<span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "decide" certain important questions; alternatively, the language of <span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "have enough terms" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation "<span class="math-container">$\vdash$</span>". (See e.g. <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. 
</p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 14: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". 
If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 15: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p> Answer 16: <p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p> <p>Except for the proof.</p> <p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href="https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising">is in fact debatable</a>. As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" is in general of very high complexity.</p> <p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are "reasonably simple" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class="math-container">$T\models\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p> <hr> <p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p> <p>Let's think about the contrapositive: that if <span class="math-container">$T\cup\{\neg\varphi\}$</span> is consistent, then we can build a model <span class="math-container">$M$</span> of <span class="math-container">$T$</span> with <span class="math-container">$M\not\models\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem "Does <span class="math-container">$N\models \psi$</span>?" 
is extremely complicated, the <em>specific</em> question we're interested in ("Does <span class="math-container">$M\models\varphi$</span>?") may not be so bad.</p> <p>Here's the natural construction to look at: take the set of terms in the language of <span class="math-container">$T$</span>, and "mod out" by <span class="math-container">$(T\cup\{\neg\varphi\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, "<span class="math-container">$1+1$</span>" and "<span class="math-container">$(1+1)\times 1$</span>" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class="math-container">$T\cup\{\neg\varphi\}$</span>! </p> <p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span> itself (<span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "decide" certain important questions; alternatively, the language of <span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "have enough terms" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation "<span class="math-container">$\vdash$</span>". (See e.g. <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. 
By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 17: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 18: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. 
Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p> Answer 19: <p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p> <p>Except for the proof.</p> <p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href="https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising">is in fact debatable</a>. As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" is in general of very high complexity.</p> <p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are "reasonably simple" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class="math-container">$T\models\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p> <hr> <p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p> <p>Let's think about the contrapositive: that if <span class="math-container">$T\cup\{\neg\varphi\}$</span> is consistent, then we can build a model <span class="math-container">$M$</span> of <span class="math-container">$T$</span> with <span class="math-container">$M\not\models\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem "Does <span class="math-container">$N\models \psi$</span>?" is extremely complicated, the <em>specific</em> question we're interested in ("Does <span class="math-container">$M\models\varphi$</span>?") may not be so bad.</p> <p>Here's the natural construction to look at: take the set of terms in the language of <span class="math-container">$T$</span>, and "mod out" by <span class="math-container">$(T\cup\{\neg\varphi\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, "<span class="math-container">$1+1$</span>" and "<span class="math-container">$(1+1)\times 1$</span>" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. 
So there's a natural structure assigned to the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class="math-container">$T\cup\{\neg\varphi\}$</span>! </p> <p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span> itself (<span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "decide" certain important questions; alternatively, the language of <span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "have enough terms" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation "<span class="math-container">$\vdash$</span>". (See e.g. <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. 
But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 20: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 21: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p> Answer 22: <p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p> <p>Except for the proof.</p> <p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href="https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising">is in fact debatable</a>. 
As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" is in general of very high complexity.</p> <p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are "reasonably simple" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class="math-container">$T\models\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p> <hr> <p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p> <p>Let's think about the contrapositive: that if <span class="math-container">$T\cup\{\neg\varphi\}$</span> is consistent, then we can build a model <span class="math-container">$M$</span> of <span class="math-container">$T$</span> with <span class="math-container">$M\not\models\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem "Does <span class="math-container">$N\models \psi$</span>?" is extremely complicated, the <em>specific</em> question we're interested in ("Does <span class="math-container">$M\models\varphi$</span>?") may not be so bad.</p> <p>Here's the natural construction to look at: take the set of terms in the language of <span class="math-container">$T$</span>, and "mod out" by <span class="math-container">$(T\cup\{\neg\varphi\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, "<span class="math-container">$1+1$</span>" and "<span class="math-container">$(1+1)\times 1$</span>" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class="math-container">$T\cup\{\neg\varphi\}$</span>! </p> <p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. 
Some of this work involves improving the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span> itself (<span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "decide" certain important questions; alternatively, the language of <span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "have enough terms" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation "<span class="math-container">$\vdash$</span>". (See e.g. <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). 
Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 23: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 24: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p> Answer 25: <p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p> <p>Except for the proof.</p> <p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href="https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising">is in fact debatable</a>. As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" 
<p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 23: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 24: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Gödel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>
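<p>To spell out that last step: it is really the soundness direction that gets used, since a proof would rule out models: $$\mathcal{M}\models\mathrm{ZFC}+\mathrm{CH}\ \Longrightarrow\ \mathrm{ZFC}\nvdash\neg\mathrm{CH},\qquad\mathcal{N}\models\mathrm{ZFC}+\neg\mathrm{CH}\ \Longrightarrow\ \mathrm{ZFC}\nvdash\mathrm{CH}.$$ Gödel (via $L$) and Cohen (via forcing) supplied $\mathcal{M}$ and $\mathcal{N}$ respectively, modulo the usual relative-consistency caveats.</p>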
<a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. 
But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 26: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 27: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p> Answer 28: <p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p> <p>Except for the proof.</p> <p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href="https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising">is in fact debatable</a>. As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" is in general of very high complexity.</p> <p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span></em>. 
Specific models can be quite complicated; however, every theory will <em>also</em> have models which are "reasonably simple" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class="math-container">$T\models\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p> <hr> <p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p> <p>Let's think about the contrapositive: that if <span class="math-container">$T\cup\{\neg\varphi\}$</span> is consistent, then we can build a model <span class="math-container">$M$</span> of <span class="math-container">$T$</span> with <span class="math-container">$M\not\models\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem "Does <span class="math-container">$N\models \psi$</span>?" is extremely complicated, the <em>specific</em> question we're interested in ("Does <span class="math-container">$M\models\varphi$</span>?") may not be so bad.</p> <p>Here's the natural construction to look at: take the set of terms in the language of <span class="math-container">$T$</span>, and "mod out" by <span class="math-container">$(T\cup\{\neg\varphi\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, "<span class="math-container">$1+1$</span>" and "<span class="math-container">$(1+1)\times 1$</span>" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class="math-container">$T\cup\{\neg\varphi\}$</span>! </p> <p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span> itself (<span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "decide" certain important questions; alternatively, the language of <span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "have enough terms" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation "<span class="math-container">$\vdash$</span>". (See e.g. <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. 
It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. 
And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 29: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 30: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p> Answer 31: <p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p> <p>Except for the proof.</p> <p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href="https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising">is in fact debatable</a>. As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" is in general of very high complexity.</p> <p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are "reasonably simple" (see below for more on this). 
These models will often be unnatural, but they exist, and they bear on the question of whether <span class="math-container">$T\models\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p> <hr> <p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p> <p>Let's think about the contrapositive: that if <span class="math-container">$T\cup\{\neg\varphi\}$</span> is consistent, then we can build a model <span class="math-container">$M$</span> of <span class="math-container">$T$</span> with <span class="math-container">$M\not\models\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem "Does <span class="math-container">$N\models \psi$</span>?" is extremely complicated, the <em>specific</em> question we're interested in ("Does <span class="math-container">$M\models\varphi$</span>?") may not be so bad.</p> <p>Here's the natural construction to look at: take the set of terms in the language of <span class="math-container">$T$</span>, and "mod out" by <span class="math-container">$(T\cup\{\neg\varphi\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, "<span class="math-container">$1+1$</span>" and "<span class="math-container">$(1+1)\times 1$</span>" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class="math-container">$T\cup\{\neg\varphi\}$</span>! </p> <p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span> itself (<span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "decide" certain important questions; alternatively, the language of <span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "have enough terms" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation "<span class="math-container">$\vdash$</span>". (See e.g. <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. 
</p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 32: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". 
If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 33: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p> Answer 34: <p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p> <p>Except for the proof.</p> <p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href="https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising">is in fact debatable</a>. As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" is in general of very high complexity.</p> <p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are "reasonably simple" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class="math-container">$T\models\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p> <hr> <p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p> <p>Let's think about the contrapositive: that if <span class="math-container">$T\cup\{\neg\varphi\}$</span> is consistent, then we can build a model <span class="math-container">$M$</span> of <span class="math-container">$T$</span> with <span class="math-container">$M\not\models\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem "Does <span class="math-container">$N\models \psi$</span>?" 
is extremely complicated, the <em>specific</em> question we're interested in ("Does <span class="math-container">$M\models\varphi$</span>?") may not be so bad.</p> <p>Here's the natural construction to look at: take the set of terms in the language of <span class="math-container">$T$</span>, and "mod out" by <span class="math-container">$(T\cup\{\neg\varphi\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, "<span class="math-container">$1+1$</span>" and "<span class="math-container">$(1+1)\times 1$</span>" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class="math-container">$T\cup\{\neg\varphi\}$</span>! </p> <p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span> itself (<span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "decide" certain important questions; alternatively, the language of <span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "have enough terms" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation "<span class="math-container">$\vdash$</span>". (See e.g. <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. 
By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 35: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 36: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. 
Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p> Answer 37: <p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p> <p>Except for the proof.</p> <p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href="https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising">is in fact debatable</a>. As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" is in general of very high complexity.</p> <p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are "reasonably simple" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class="math-container">$T\models\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p> <hr> <p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p> <p>Let's think about the contrapositive: that if <span class="math-container">$T\cup\{\neg\varphi\}$</span> is consistent, then we can build a model <span class="math-container">$M$</span> of <span class="math-container">$T$</span> with <span class="math-container">$M\not\models\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem "Does <span class="math-container">$N\models \psi$</span>?" is extremely complicated, the <em>specific</em> question we're interested in ("Does <span class="math-container">$M\models\varphi$</span>?") may not be so bad.</p> <p>Here's the natural construction to look at: take the set of terms in the language of <span class="math-container">$T$</span>, and "mod out" by <span class="math-container">$(T\cup\{\neg\varphi\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, "<span class="math-container">$1+1$</span>" and "<span class="math-container">$(1+1)\times 1$</span>" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. 
So there's a natural structure assigned to the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class="math-container">$T\cup\{\neg\varphi\}$</span>! </p> <p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. Some of this work involves improving the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span> itself (<span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "decide" certain important questions; alternatively, the language of <span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "have enough terms" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation "<span class="math-container">$\vdash$</span>". (See e.g. <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. 
But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 38: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 39: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p> Answer 40: <p>I would say you are <em>right</em> - there is no reason to expect Goedel's completeness theorem to be true!</p> <p>Except for the proof.</p> <p>I consider the completeness theorem to be the most counterintuitive result in basic logic, far more so than the incompleteness theorem (whose surprisingness <a href="https://math.stackexchange.com/questions/2046180/why-do-we-find-g%C3%B6dels-incompleteness-theorem-surprising">is in fact debatable</a>. 
As you say, it shows that a very concrete statement "<span class="math-container">$T\vdash \varphi$</span>" is equivalent to a seemingly-much-more-complicated statement "Every model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span>", and the complexity of the latter is suggested by the fact that, for fixed <span class="math-container">$M$</span> and <span class="math-container">$\varphi$</span>, the question "Does <span class="math-container">$M\models\varphi$</span>?" is in general of very high complexity.</p> <p>One way to think about what happens is this: <em>telling whether <strong>every</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span> can be easier than telling if <strong>a specific</strong> model of <span class="math-container">$T$</span> satisfies <span class="math-container">$\varphi$</span></em>. Specific models can be quite complicated; however, every theory will <em>also</em> have models which are "reasonably simple" (see below for more on this). These models will often be unnatural, but they exist, and they bear on the question of whether <span class="math-container">$T\models\varphi$</span>; and it is these models, essentially, that make the question an answerable one.</p> <hr> <p>Having acknowledged that GCT is counterintuitive, let me now try to convince you that it's true.</p> <p>Let's think about the contrapositive: that if <span class="math-container">$T\cup\{\neg\varphi\}$</span> is consistent, then we can build a model <span class="math-container">$M$</span> of <span class="math-container">$T$</span> with <span class="math-container">$M\not\models\varphi$</span>. Note that this makes our job much easier - we only have to build <em>one</em> model! And even though the <em>general</em> problem "Does <span class="math-container">$N\models \psi$</span>?" is extremely complicated, the <em>specific</em> question we're interested in ("Does <span class="math-container">$M\models\varphi$</span>?") may not be so bad.</p> <p>Here's the natural construction to look at: take the set of terms in the language of <span class="math-container">$T$</span>, and "mod out" by <span class="math-container">$(T\cup\{\neg\varphi\})$</span>-provable equivalence. E.g. in an appropriate theory of arithmetic, "<span class="math-container">$1+1$</span>" and "<span class="math-container">$(1+1)\times 1$</span>" are terms which are provably equal, so they represent the same equivalence class. It's not hard to show that the operations and relations of the language are well-defined on these equivalence classes. So there's a natural structure assigned to the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span>; if you think about it for a bit, it should become plausible that this is in fact a model of <span class="math-container">$T\cup\{\neg\varphi\}$</span>! </p> <p>. . . Of course, it's <em>not</em>, though. The proof isn't that simple. But the <em>idea</em> is right, it just needs some work. 
Some of this work involves improving the theory <span class="math-container">$T\cup\{\neg\varphi\}$</span> itself (<span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "decide" certain important questions; alternatively, the language of <span class="math-container">$T\cup\{\neg\varphi\}$</span> might not "have enough terms" to build the structure we want); the other part of the work involves working with the specific properties of the provability relation "<span class="math-container">$\vdash$</span>". (See e.g. <a href="https://math.stackexchange.com/questions/1449041/why-are-maximal-consistent-sets-essential-to-henkin-proofs-of-completeness">these</a> two <a href="https://math.stackexchange.com/questions/1463478/what-is-the-role-of-the-proof-system-in-a-henkin-proof-of-completeness">questions</a>, respectively.)</p> <hr> <p><strong>Computability-theoretic aside: "simple" models.</strong> (No, I don't mean <a href="https://ocw.mit.edu/courses/mathematics/18-996a-simplicity-theory-spring-2004/" rel="nofollow noreferrer">that kind of simplicity</a>.) </p> <p>While the completeness theorem lives solidly in model theory, interesting things happen if we bring <em>computability</em> into the picture. The proof of the Completeness Theorem gestured vaguely at above is a <em>construction</em> of a model. It's not hard to see that it is not a computable construction in any good sense, but that's not the end of the story: we can ask <em>how noncomputable it is</em>. </p> <p>It turns out that it is actually surprisingly close to computable - namely, we have:</p> <blockquote> <p>Suppose <span class="math-container">$T$</span> is a countable theory in a countable language. Then there is a model of <span class="math-container">$T$</span> which is <strong>low</strong> relative to <span class="math-container">$T$</span>.</p> </blockquote> <p>Here, "low" is a computability-theoretic property: a set <span class="math-container">$X$</span> is low if the <a href="https://en.wikipedia.org/wiki/Halting_problem" rel="nofollow noreferrer">halting problem</a> defined <a href="https://en.wikipedia.org/wiki/Oracle_machine" rel="nofollow noreferrer">relative to <span class="math-container">$X$</span></a> is no more complicated than the classical halting problem. Lowness <em>relative to <span class="math-container">$A$</span></em> is defined similarly. By comparison, the question "Does <span class="math-container">$T$</span> prove <span class="math-container">$\varphi$</span>?" is at the level of the halting problem relative to <span class="math-container">$T$</span>; so actually, we're building a model that is <em>much simpler</em> than the original provability question!</p> <hr> <p><strong>And set theory has things to say too.</strong></p> <p>By reducing everything to finite combinatorics, the completeness theorem shows that questions like <a href="https://plato.stanford.edu/entries/continuum-hypothesis/" rel="nofollow noreferrer">"Does <span class="math-container">$2^{\aleph_0}=\aleph_1$</span>?"</a> will never arise when trying to tell whether every model of of some theory satisfy some sentence. But this is just a result about <em>first-order</em> logic! For general logics, set-theoretic issues can indeed crop up.</p> <p>A great example of this is <a href="https://en.wikipedia.org/wiki/Second-order_logic" rel="nofollow noreferrer"><strong>second-order logic</strong></a> (with the standard, as opposed to Henkin, semantics; the Henkin semantics makes it essentially equivalent to first-order logic). 
Second-order logic allows you to quantify over relations and functions on the domain, in addition to individuals (which is what first-order logic lets you quantify over). For instance, there is a second-order sentence true in exactly the infinite structures: it looks like "There is an injective, non-surjective function," or more formally <span class="math-container">$$\exists F[\forall x, y(F(x)=F(y)\implies x=y)\wedge \exists z\forall x(F(x)\not=z)].$$</span> So <em>Compactness</em> immediately fails for second-order logic. But the one that takes the cake is:</p> <blockquote> <p>There is a sentence <span class="math-container">$\chi$</span> in second-order logic, which is a <strong>validity</strong> (= true in every structure) if and only if the Continuum Hypothesis is true.</p> </blockquote> <p>This takes a bit of work to prove; if you're interested, I'll add its construction in detail. </p> <p>My point is that set-theoretic issues do indeed come up when trying to analyze the satisfaction relation for <em>arbitrary</em> logics; but the completeness theorem shows that first-order logic is <em>especially nice</em>. And this is, indeed, a very nontrivial fact!</p> <p><em>If you're interested in comparing first-order logic with other logics, you may be interested in <a href="http://projecteuclid.org/euclid.pl/1235417263" rel="nofollow noreferrer">abstract model theory</a>!</em></p> Answer 41: <p>Notice that the right side says "for <em>every</em> model $\mathcal{M}$". If something is undecidable, then it will be true for some models and false for others. So the right side fails, and so does the left side, i.e., there is no formal proof of an undecidable statement in first-order logic. This is all as expected.</p> Answer 42: <blockquote> <p>I see the problem cause in some models $(I^g_F(\tau_1),\dots,I^g_F(\tau_k))\in F(R)$ may be undecidable.</p> </blockquote> <p>I think you are confusing provability with satisfaction, since for any particular model a sentence will always either be true or false. Undecidability applies to provability, since it may be the case that a sentence is true in some models and not others.</p> <blockquote> <p>We can end up with set theoretical situation $\mathfrak{c}=\aleph_1$. One cannot tell if this hold or not.</p> </blockquote> <p>We can tell if it holds <em>for some particular model</em>. Remember that Godel and Cohen together showed that CH holds in some models of ZFC and not others. Completeness simply says that this one fact is enough to prove that CH is independent of ZFC.</p>
Is it correct to write 0,000 or just 0?
2,099,890
7
https://math.stackexchange.com/questions/2099890/it-is-correct-to-write-0-000-or-just-0
<p>Can you please tell me whether it is correct to write $0,000$ or just $0$? I have to complete a table containing decimal numbers, and I don't know what to do in the case where the value is exactly zero.</p>
[ { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). 
One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. 
That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. 
$3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). 
One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. 
That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. 
$3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). 
One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" }, { "answer_id": 2099901, "score": 11, "text": "<p>Mathematically they are equivalent. </p>\n\n<p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. 
That is it means something like $0.000\\pm0.0005$ (or perhaps slightly larger error marigin).</p>\n\n<p>On the other hand if you only write $0$ it might be interpreted as meaning exactly that (without any error marigin). However if you have error marigin on the other numbers it's likely that there's some error for the zero values as well and that they are not different from the other values (except they're close to zero). One should therefore probably use the same number of decimals there as for the other values.</p>\n\n<p>Also in tables one should distinguish between the quantity $0$ and that there is no quantity (which is normally indicated by a dash).</p>\n" }, { "answer_id": 2099899, "score": 4, "text": "<p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p>\n\n<p>One notable exception is as Laray suggests: zero written as \"$0$\" will stand out in the table if all other values have several decimal places.</p>\n\n<p>Another exception would be if different groups are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p>\n" }, { "answer_id": 2099924, "score": 1, "text": "<p>$0,000$ cannot be taken wrong, it clearly denotes the value $0$. By the way, the same question holds for every trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p>\n\n<p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>\n" } ]
Question: it is correct to write 0,000 or just 0? Answer 1: <p>Mathematically they are equivalent. </p> <p>However there are conventions about accuracy that make them different. When writing $0.000$ it is basically assumed that those written digits are correct, but there may be errors further down the expansion. That is, it means something like $0.000\pm0.0005$ (or perhaps a slightly larger error margin).</p> <p>On the other hand, if you only write $0$ it might be interpreted as meaning exactly that (without any error margin). However, if you have error margins on the other numbers, it's likely that there's some error for the zero values as well and that they are not different from the other values (except that they're close to zero). One should therefore probably use the same number of decimals there as for the other values.</p> <p>Also, in tables one should distinguish between the quantity $0$ and the absence of a quantity (which is normally indicated by a dash).</p> Answer 2: <p>One typically uses the same number of decimal places for all entries in the table, both for aesthetic reasons and to indicate the precision of the values if they are computed numerically.</p> <p>One notable exception is as Laray suggests: zero written as "$0$" will stand out in the table if all other values have several decimal places.</p> <p>Another exception would be if different columns are computed differently--for example, a column of population counts might all have no decimal places, while a column of average ages might have a couple of decimal places each.</p> Answer 3: <p>$0,000$ cannot be taken wrong; it clearly denotes the value $0$. By the way, the same question holds for all trailing zeroes. F.i. $3,45000$ is all right for $3,45$.</p> <p>Anyway, as said by @skyking, showing the trailing zeroes stresses that they are exact.</p>
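<p>To make the same-number-of-decimals convention from the answers above concrete, here is a minimal Python sketch; the sample column of values, the three-decimal choice, and the dash-for-missing convention are illustrative assumptions rather than part of any answer:</p> <pre><code>def format_cell(value, decimals=3):
    # Render one table cell with a fixed number of decimals; an exact zero
    # then appears as "0.000", matching the rest of the column.
    if value is None:                 # no quantity at all: show a dash
        return "-"
    return f"{value:.{decimals}f}"

column = [1.234, 0.0, None, 3.45]     # hypothetical measured values
print([format_cell(v) for v in column])
# ['1.234', '0.000', '-', '3.450']
</code></pre>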
Prob. 5, Chap. 4 in Baby Rudin: Continuous extension of a function defined on a closed set
2,100,722
7
https://math.stackexchange.com/questions/2100722/prob-5-chap-4-in-baby-rudin-continuous-extension-of-a-function-defined-on-a
<p>Here is Prob. 5, Chap. 4 in the book <em>Principles of Mathematical Analysis</em> by Walter Rudin, 3rd edition:</p> <blockquote> <p>If <span class="math-container">$f$</span> is a real continuous function defined on a closed set <span class="math-container">$E \subset \mathbb{R}^1$</span>, prove that there exist real continuous functions <span class="math-container">$g$</span> on <span class="math-container">$\mathbb{R}^1$</span> such that <span class="math-container">$g(x) = f(x)$</span> for all <span class="math-container">$x \in E$</span>. (Such functions <span class="math-container">$g$</span> are called <em>continuous extensions</em> of <span class="math-container">$f$</span> from <span class="math-container">$E$</span> to <span class="math-container">$\mathbb{R}^1$</span>.) Show that the result becomes false if the word &quot;closed&quot; is omitted. Extend the result to vector-valued functions. <em>Hint:</em> Let the graph of <span class="math-container">$g$</span> be a straight line on each of the segments which constitute the complement of <span class="math-container">$E$</span> (compare Exercise 29, Chap. 2). The result remains true if <span class="math-container">$\mathbb{R}^1$</span> is replaced by any metric space, but the proof is not so simple.</p> </blockquote> <p>First of all, we show that the word &quot;closed&quot; is essential. Let <span class="math-container">$E = (-\infty, 0) \cup (0, +\infty)$</span>, and let <span class="math-container">$f \colon E \to \mathbb{R}^1$</span> be defined as <span class="math-container">$$ f(x) = \frac{1}{x} \ \mbox{ for all } x \in E.$$</span> Then the set <span class="math-container">$E$</span> is not closed in <span class="math-container">$\mathbb{R}^1$</span>, but there is no continuous function <span class="math-container">$g \colon \mathbb{R}^1 \to \mathbb{R}^1$</span> such that <span class="math-container">$g(x) = f(x)$</span> for all <span class="math-container">$x \in E$</span>, although <span class="math-container">$f$</span> is certainly continuous. Am I right?</p> <p>Now we show the main result:</p> <p>First of all, here is Exercise 29, Chap. 2 in Baby Rudin, 3rd edition:</p> <blockquote> <p>Prove that every open set in <span class="math-container">$\mathbb{R}^1$</span> is the union of an at most countable collection of disjoint segments. 
...</p> </blockquote> <p>Since <span class="math-container">$\mathbb{R}^1 - E$</span> is open in <span class="math-container">$\mathbb{R}^1$</span>, it is the union of an at most countable collection of disjoint segments <span class="math-container">$\left\{ \left(a_n, b_n \right) \right\}_{n \in K}$</span>, where <span class="math-container">$K$</span> is either the set <span class="math-container">$J = \left\{ 1, 2, 3, \ldots \right\}$</span> or <span class="math-container">$K$</span> is <span class="math-container">$J_N = \left\{ 1, \ldots, N \right\}$</span> for some <span class="math-container">$N \in \mathbb{N}$</span>; moreover, <span class="math-container">$\left( a_m, b_m \right) \cap \left( a_n, b_n \right) = \emptyset$</span> for any two distinct <span class="math-container">$m, n \in K$</span>; finally, some <span class="math-container">$\left( a_n, b_n \right)$</span> can possibly be infinite.</p> <p>Now since <span class="math-container">$$\mathbb{R}^1 - E = \bigcup_{n \in K} \left( a_n, b_n \right),$$</span> each of the (finite) endpoints of these segments is an element of the set <span class="math-container">$E$</span>.</p> <p>Now if some segment <span class="math-container">$\left( a_n, b_n \right)$</span> has <span class="math-container">$-\infty$</span> as its left endpoint, then we put <span class="math-container">$g(x) = f \left( b_n \right)$</span> for all <span class="math-container">$x \in \left( a_n, b_n \right)$</span>.</p> <p>If some segment <span class="math-container">$\left(a_m, b_m \right)$</span> has <span class="math-container">$+\infty$</span> as its right endpoint, then we put <span class="math-container">$g(x) = f\left( a_m \right)$</span> for all <span class="math-container">$x \in \left( a_m, b_m \right)$</span>.</p> <p>And, for any segment <span class="math-container">$\left( a_n, b_n \right)$</span>, where <span class="math-container">$-\infty &lt; a_n &lt; b_n &lt; +\infty$</span>, we put <span class="math-container">$$ g(x) = f\left( a_n \right) + \left[ \frac{ f\left( b_n \right) - f\left( a_n \right)}{b_n - a_n} \right] \left( x - a_n \right) $$</span> for all <span class="math-container">$x \in \left( a_n, b_n \right)$</span>.</p> <p>I hope I've used the hint given by Rudin correctly. If so, then how does one rigorously show that the resulting real function <span class="math-container">$g$</span> defined on <span class="math-container">$\mathbb{R}^1$</span> is continuous on all of <span class="math-container">$\mathbb{R}^1$</span>? This is intuitively clear, but how does the <em>countability</em> of these segments become significant? Up to this point, Rudin hasn't stated any result about the continuity of a function defined using several continuous functions as pieces, in the way our function <span class="math-container">$g$</span> is defined here.</p> <p>Moreover, are there any other possibilities for <span class="math-container">$g$</span> besides the one we have defined above?</p> <p>Now we state the generalization of the above result for vector-valued functions.</p> <blockquote> <p>Let <span class="math-container">$E$</span> be a closed set in <span class="math-container">$\mathbb{R}^1$</span>, and let <span class="math-container">$\mathbf{f}$</span> be a continuous function defined on <span class="math-container">$E$</span> with values in some <span class="math-container">$\mathbb{R}^k$</span>. 
Then there are continuous functions <span class="math-container">$\mathbf{g}$</span> defined on <span class="math-container">$\mathbb{R}^1$</span> with values in the same <span class="math-container">$\mathbb{R}^k$</span> such that <span class="math-container">$\mathbf{g}(x) = \mathbf{f}(x)$</span> for all <span class="math-container">$x \in E$</span>.</p> </blockquote> <p>In order to prove this generalized result, let's put <span class="math-container">$\mathbf{f}(x) = \left( f_1(x), \ldots, f_k(x) \right)$</span> for all <span class="math-container">$x \in E$</span>, where <span class="math-container">$f_1, \ldots, f_k$</span> are real continuous functions defined on <span class="math-container">$E$</span>; each of these functions can be continuously extended to all of <span class="math-container">$\mathbb{R}^1$</span> by the scalar case, and taking these extensions as the components of <span class="math-container">$\mathbf{g}$</span> gives a continuous extension of <span class="math-container">$\mathbf{f}$</span> from <span class="math-container">$E$</span> to all of <span class="math-container">$\mathbb{R}^1$</span>. Is this reasoning correct?</p> <p>Last but not least, I have the following query.</p> <blockquote> <p>Let <span class="math-container">$\left( X, d_X \right)$</span> and <span class="math-container">$\left( Y, d_Y \right)$</span> be metric spaces, let <span class="math-container">$E \subset X$</span> be such that <span class="math-container">$E$</span> is closed in <span class="math-container">$\left( X, d_X \right)$</span>, and let <span class="math-container">$f$</span> be a continuous mapping of the induced metric space <span class="math-container">$E$</span> into the metric space <span class="math-container">$Y$</span>. Then can we find---or prove the existence of---a continuous mapping <span class="math-container">$g$</span> of <span class="math-container">$X$</span> into <span class="math-container">$Y$</span> such that <span class="math-container">$g(x) = f(x)$</span> for all <span class="math-container">$x \in E$</span>?</p> <p>If so, then how can we find such a map or prove that it exists? If not, then what is the general result that Rudin is referring to, and how does one come up with the not-so-simple proof that he has alluded to?</p> </blockquote>
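Since the question's main gap is the continuity check at the endpoints, here is the left-hand half of that check written out as a worked computation. This is only a sketch of one of the finitely many cases, using the notation defined in the question.

```latex
% Left-hand limit of g at a finite right endpoint b_n \in E of a
% complementary segment (a_n, b_n), using the linear formula above:
\[
  \lim_{x \to b_n^-} g(x)
    = f(a_n) + \frac{f(b_n) - f(a_n)}{b_n - a_n}\,(b_n - a_n)
    = f(b_n) = g(b_n).
\]
% The right-hand limit at b_n is handled by the sequence argument in
% the answers below: points x > b_n near b_n lie either in E, where
% g = f and continuity of f on E applies, or in complementary
% segments whose endpoints converge to b_n, giving a squeeze.
```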
[ { "answer_id": 2195643, "score": 2, "text": "<p>Your proof that closedness is necessary is correct.</p>\n\n<p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the intervals with an arbitrary ordinal.</p>\n\n<p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em>, which means that it only takes into account a small neighborhood of the point in question (in contrast, \"the $x$ such that $f(x)$ takes on the maximum value\" is not a local property). If $x$ is in the interior of the domain of the original function, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region on which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p>\n\n<p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the boundary of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ at the endpoints of those segments, you can use it to stitch together a continuous extension $g$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves continuous.</p>\n\n<p>The rest of your question has already been answered.</p>\n\n<p>Your reasoning about $\\mathbb{R}^k$ is correct.</p>\n" },
{ "answer_id": 2100986, "score": 1, "text": "<p>The countability of the open intervals comprising the complement of $E$ is not relevant. It is enough that $\\mathbb{R} \\setminus E=\\bigcup J$ where each member of $J$ is $\\mathbb{R}$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p>\n\n<p>$g$ is continuous at each $x\\in \\mathbb{R} \\setminus E$ because $x\\in j\\in J$ for some $j,$ and $j$ is open, and $g$ is linear on $j.$</p>\n\n<p>For $x\\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p>\n\n<p>(1). If $T=\\{n: x_n\\in E\\}$ is infinite, then since $f$ is continuous on $E$ we have $\\lim_{n\\to\\infty\\land n\\in T}g(x_n)=\\lim_{n\\to \\infty \\land n\\in T}f(x_n)=f(x)=g(x).$</p>\n\n<p>(2). If $U=\\{n:x_n \\not \\in E\\}$ is infinite, let $U^+=\\{n\\in U:x_n&gt;x\\}$ and $U^-=\\{n\\in U:x_n&lt;x\\}.$ </p>\n\n<p>If $U^+$ is infinite then either </p>\n\n<p>(i). $x=\\inf j$ for some (unique) $j\\in J,$ and by the def'n of $g|_{j}$ we have $\\lim_{n\\to \\infty\\land n\\in U^+}g(x_n)=f(x).$ </p>\n\n<p>(ii). OR $x_n\\in j_n=(a_n,b_n)\\in J$ with $b_n&lt;\\infty$, for all but finitely many $n\\in U^+$, and we will have $b_n\\to x$ and $a_n\\to x$. Now $$\\min (f(a_n),f(b_n))\\le g(x_n)\\le \\max (f(a_n),f(b_n)).$$ Since $f(a_n)\\to f(x)=g(x)$ and $f(b_n)\\to f(x)=g(x)$ we have $g(x_n)\\to g(x).$</p>\n\n<p>The case for infinite $U^-$ is handled similarly. </p>\n\n<p>An exercise in General Topology, by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\\to \\mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\\to \\mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p>\n" },
{ "answer_id": 2195581, "score": 1, "text": "<p>I haven't the time to check your proof right now. But to address your other questions:</p>\n\n<ul>\n<li><p>Extension to the case where the codomain is $\\mathbb{R}^d$ is immediate, as you say.</p></li>\n<li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \\{0,1\\}$. Let $Y = \\{0,1\\}$ with its discrete metric. Define $f : E \\to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\\{0,1\\}$.</p></li>\n<li><p>The general result which Rudin alludes to is the <a href=\"https://en.wikipedia.org/wiki/Tietze_extension_theorem\" rel=\"nofollow noreferrer\">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li>\n</ul>\n" } ]
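On the metric-space version raised at the end of the question: the formula that the second answer could not recall appears, if I remember correctly, as an exercise in Engelking (Exercise 4.1.F, attributed to Hausdorff). Treat the exact statement below as a recollection to be checked against the book, not a quotation.

```latex
% Hausdorff's explicit extension (as recalled; verify against
% Engelking, Ex. 4.1.F).  Requires \usepackage{amsmath}.
% Let E \subset X be closed and nonempty, let f : E \to [1,2] be
% continuous, and write d(x,E) = \inf_{p \in E} d(x,p):
\[
  g(x) =
    \begin{cases}
      f(x), & x \in E, \\[4pt]
      \dfrac{\inf\{\, f(p)\, d(x,p) : p \in E \,\}}{d(x,E)},
        & x \in X \setminus E.
    \end{cases}
\]
% Off E one checks 1 \le g(x) \le 2, and continuity follows from the
% continuity of f and of the distance functions.  An unbounded f is
% first composed with a homeomorphism of \mathbb{R} onto (1,2); a
% small correction is then needed wherever the extension attains the
% endpoint values 1 or 2.
```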
If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p>\n\n<p>The rest of your question has been already answered.</p>\n\n<p>Your reasoning about $\\mathbb{R}^n$ is correct.</p>\n" }, { "answer_id": 2100986, "score": 1, "text": "<p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\\mathbb R$ \\ $E=\\cup J$ where each member of $J$ is $\\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p>\n\n<p>$g$ is continuous at each $x\\in \\mathbb R$ \\ $E$ because $x\\in j\\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p>\n\n<p>For $x\\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p>\n\n<p>(1). If $T=\\{n: x_n\\in E\\}$ is infinite, then since $f$ is continuous on $E$ we have $\\lim_{n\\to\\infty\\land n\\in T}g(x_n)=\\lim_{n\\to \\infty \\land x\\in T}f(x_n)=f(x)=g(x).$</p>\n\n<p>(2). If $U=\\{n:x_n \\not \\in E\\}$ is infinite, let $U^+=\\{n\\in U:x_n&gt;x\\}$ and $U^-=\\{n:x_n&lt;x\\}.$ </p>\n\n<p>If $U^+$ is infinite then either </p>\n\n<p>(i). $x=\\inf j$ for some (unique)$j\\in J,$ and by the def'n of $g|_{j}$ we have $\\lim_{n\\to \\infty\\land n\\in U^+}g(x_n)=f(x).$ </p>\n\n<p>(ii). OR $x_n\\in j_n=(a_n,b_n)\\in J$ with $b_n&lt;\\infty$, for all but finitely many $n\\in U^+$ and we will have $b_n\\to x$ and $a_n\\to x$ .Now $$\\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\\max (f(a_n),f(b_n)).$$ Since $f(a_n)\\to f(x)=g(x)$ and $f(b_n)\\to f(x)=g(x)$ we have $g(x_n)\\to g(x).$</p>\n\n<p>The case for infinite $U^-$ is handled similarly. </p>\n\n<p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\\to \\mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\\to \\mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p>\n" }, { "answer_id": 2195581, "score": 1, "text": "<p>I haven't the time to check your proof right now. But to address your other questions:</p>\n\n<ul>\n<li><p>Extension to the case where the codomain is $\\mathbb{R}^d$ is immediate, as you say.</p></li>\n<li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \\{0,1\\}$. Let $Y = \\{0,1\\}$ with its discrete metric. Define $f : E \\to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\\{0,1\\}$.</p></li>\n<li><p>The general result which Rudin alludes to is the <a href=\"https://en.wikipedia.org/wiki/Tietze_extension_theorem\" rel=\"nofollow noreferrer\">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li>\n</ul>\n" }, { "answer_id": 2195643, "score": 2, "text": "<p>Your proof that closedness is necessary is correct.</p>\n\n<p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. 
The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p>\n\n<p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, \"the $x$ such that $f(x)$ takes on the maximum value\" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p>\n\n<p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p>\n\n<p>The rest of your question has been already answered.</p>\n\n<p>Your reasoning about $\\mathbb{R}^n$ is correct.</p>\n" }, { "answer_id": 2100986, "score": 1, "text": "<p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\\mathbb R$ \\ $E=\\cup J$ where each member of $J$ is $\\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p>\n\n<p>$g$ is continuous at each $x\\in \\mathbb R$ \\ $E$ because $x\\in j\\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p>\n\n<p>For $x\\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p>\n\n<p>(1). If $T=\\{n: x_n\\in E\\}$ is infinite, then since $f$ is continuous on $E$ we have $\\lim_{n\\to\\infty\\land n\\in T}g(x_n)=\\lim_{n\\to \\infty \\land x\\in T}f(x_n)=f(x)=g(x).$</p>\n\n<p>(2). If $U=\\{n:x_n \\not \\in E\\}$ is infinite, let $U^+=\\{n\\in U:x_n&gt;x\\}$ and $U^-=\\{n:x_n&lt;x\\}.$ </p>\n\n<p>If $U^+$ is infinite then either </p>\n\n<p>(i). $x=\\inf j$ for some (unique)$j\\in J,$ and by the def'n of $g|_{j}$ we have $\\lim_{n\\to \\infty\\land n\\in U^+}g(x_n)=f(x).$ </p>\n\n<p>(ii). OR $x_n\\in j_n=(a_n,b_n)\\in J$ with $b_n&lt;\\infty$, for all but finitely many $n\\in U^+$ and we will have $b_n\\to x$ and $a_n\\to x$ .Now $$\\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\\max (f(a_n),f(b_n)).$$ Since $f(a_n)\\to f(x)=g(x)$ and $f(b_n)\\to f(x)=g(x)$ we have $g(x_n)\\to g(x).$</p>\n\n<p>The case for infinite $U^-$ is handled similarly. 
</p>\n\n<p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\\to \\mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\\to \\mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p>\n" }, { "answer_id": 2195581, "score": 1, "text": "<p>I haven't the time to check your proof right now. But to address your other questions:</p>\n\n<ul>\n<li><p>Extension to the case where the codomain is $\\mathbb{R}^d$ is immediate, as you say.</p></li>\n<li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \\{0,1\\}$. Let $Y = \\{0,1\\}$ with its discrete metric. Define $f : E \\to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\\{0,1\\}$.</p></li>\n<li><p>The general result which Rudin alludes to is the <a href=\"https://en.wikipedia.org/wiki/Tietze_extension_theorem\" rel=\"nofollow noreferrer\">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li>\n</ul>\n" }, { "answer_id": 2195643, "score": 2, "text": "<p>Your proof that closedness is necessary is correct.</p>\n\n<p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p>\n\n<p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, \"the $x$ such that $f(x)$ takes on the maximum value\" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p>\n\n<p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. 
If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p>\n\n<p>The rest of your question has been already answered.</p>\n\n<p>Your reasoning about $\\mathbb{R}^n$ is correct.</p>\n" }, { "answer_id": 2100986, "score": 1, "text": "<p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\\mathbb R$ \\ $E=\\cup J$ where each member of $J$ is $\\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p>\n\n<p>$g$ is continuous at each $x\\in \\mathbb R$ \\ $E$ because $x\\in j\\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p>\n\n<p>For $x\\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p>\n\n<p>(1). If $T=\\{n: x_n\\in E\\}$ is infinite, then since $f$ is continuous on $E$ we have $\\lim_{n\\to\\infty\\land n\\in T}g(x_n)=\\lim_{n\\to \\infty \\land x\\in T}f(x_n)=f(x)=g(x).$</p>\n\n<p>(2). If $U=\\{n:x_n \\not \\in E\\}$ is infinite, let $U^+=\\{n\\in U:x_n&gt;x\\}$ and $U^-=\\{n:x_n&lt;x\\}.$ </p>\n\n<p>If $U^+$ is infinite then either </p>\n\n<p>(i). $x=\\inf j$ for some (unique)$j\\in J,$ and by the def'n of $g|_{j}$ we have $\\lim_{n\\to \\infty\\land n\\in U^+}g(x_n)=f(x).$ </p>\n\n<p>(ii). OR $x_n\\in j_n=(a_n,b_n)\\in J$ with $b_n&lt;\\infty$, for all but finitely many $n\\in U^+$ and we will have $b_n\\to x$ and $a_n\\to x$ .Now $$\\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\\max (f(a_n),f(b_n)).$$ Since $f(a_n)\\to f(x)=g(x)$ and $f(b_n)\\to f(x)=g(x)$ we have $g(x_n)\\to g(x).$</p>\n\n<p>The case for infinite $U^-$ is handled similarly. </p>\n\n<p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\\to \\mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\\to \\mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p>\n" }, { "answer_id": 2195581, "score": 1, "text": "<p>I haven't the time to check your proof right now. But to address your other questions:</p>\n\n<ul>\n<li><p>Extension to the case where the codomain is $\\mathbb{R}^d$ is immediate, as you say.</p></li>\n<li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \\{0,1\\}$. Let $Y = \\{0,1\\}$ with its discrete metric. Define $f : E \\to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\\{0,1\\}$.</p></li>\n<li><p>The general result which Rudin alludes to is the <a href=\"https://en.wikipedia.org/wiki/Tietze_extension_theorem\" rel=\"nofollow noreferrer\">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li>\n</ul>\n" }, { "answer_id": 2195643, "score": 2, "text": "<p>Your proof that closedness is necessary is correct.</p>\n\n<p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. 
The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p>\n\n<p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, \"the $x$ such that $f(x)$ takes on the maximum value\" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p>\n\n<p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p>\n\n<p>The rest of your question has been already answered.</p>\n\n<p>Your reasoning about $\\mathbb{R}^n$ is correct.</p>\n" }, { "answer_id": 2100986, "score": 1, "text": "<p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\\mathbb R$ \\ $E=\\cup J$ where each member of $J$ is $\\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p>\n\n<p>$g$ is continuous at each $x\\in \\mathbb R$ \\ $E$ because $x\\in j\\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p>\n\n<p>For $x\\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p>\n\n<p>(1). If $T=\\{n: x_n\\in E\\}$ is infinite, then since $f$ is continuous on $E$ we have $\\lim_{n\\to\\infty\\land n\\in T}g(x_n)=\\lim_{n\\to \\infty \\land x\\in T}f(x_n)=f(x)=g(x).$</p>\n\n<p>(2). If $U=\\{n:x_n \\not \\in E\\}$ is infinite, let $U^+=\\{n\\in U:x_n&gt;x\\}$ and $U^-=\\{n:x_n&lt;x\\}.$ </p>\n\n<p>If $U^+$ is infinite then either </p>\n\n<p>(i). $x=\\inf j$ for some (unique)$j\\in J,$ and by the def'n of $g|_{j}$ we have $\\lim_{n\\to \\infty\\land n\\in U^+}g(x_n)=f(x).$ </p>\n\n<p>(ii). OR $x_n\\in j_n=(a_n,b_n)\\in J$ with $b_n&lt;\\infty$, for all but finitely many $n\\in U^+$ and we will have $b_n\\to x$ and $a_n\\to x$ .Now $$\\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\\max (f(a_n),f(b_n)).$$ Since $f(a_n)\\to f(x)=g(x)$ and $f(b_n)\\to f(x)=g(x)$ we have $g(x_n)\\to g(x).$</p>\n\n<p>The case for infinite $U^-$ is handled similarly. 
</p>\n\n<p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\\to \\mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\\to \\mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p>\n" }, { "answer_id": 2195581, "score": 1, "text": "<p>I haven't the time to check your proof right now. But to address your other questions:</p>\n\n<ul>\n<li><p>Extension to the case where the codomain is $\\mathbb{R}^d$ is immediate, as you say.</p></li>\n<li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \\{0,1\\}$. Let $Y = \\{0,1\\}$ with its discrete metric. Define $f : E \\to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\\{0,1\\}$.</p></li>\n<li><p>The general result which Rudin alludes to is the <a href=\"https://en.wikipedia.org/wiki/Tietze_extension_theorem\" rel=\"nofollow noreferrer\">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li>\n</ul>\n" }, { "answer_id": 2195643, "score": 2, "text": "<p>Your proof that closedness is necessary is correct.</p>\n\n<p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p>\n\n<p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, \"the $x$ such that $f(x)$ takes on the maximum value\" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p>\n\n<p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. 
If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p>\n\n<p>The rest of your question has been already answered.</p>\n\n<p>Your reasoning about $\\mathbb{R}^n$ is correct.</p>\n" }, { "answer_id": 2100986, "score": 1, "text": "<p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\\mathbb R$ \\ $E=\\cup J$ where each member of $J$ is $\\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p>\n\n<p>$g$ is continuous at each $x\\in \\mathbb R$ \\ $E$ because $x\\in j\\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p>\n\n<p>For $x\\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p>\n\n<p>(1). If $T=\\{n: x_n\\in E\\}$ is infinite, then since $f$ is continuous on $E$ we have $\\lim_{n\\to\\infty\\land n\\in T}g(x_n)=\\lim_{n\\to \\infty \\land x\\in T}f(x_n)=f(x)=g(x).$</p>\n\n<p>(2). If $U=\\{n:x_n \\not \\in E\\}$ is infinite, let $U^+=\\{n\\in U:x_n&gt;x\\}$ and $U^-=\\{n:x_n&lt;x\\}.$ </p>\n\n<p>If $U^+$ is infinite then either </p>\n\n<p>(i). $x=\\inf j$ for some (unique)$j\\in J,$ and by the def'n of $g|_{j}$ we have $\\lim_{n\\to \\infty\\land n\\in U^+}g(x_n)=f(x).$ </p>\n\n<p>(ii). OR $x_n\\in j_n=(a_n,b_n)\\in J$ with $b_n&lt;\\infty$, for all but finitely many $n\\in U^+$ and we will have $b_n\\to x$ and $a_n\\to x$ .Now $$\\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\\max (f(a_n),f(b_n)).$$ Since $f(a_n)\\to f(x)=g(x)$ and $f(b_n)\\to f(x)=g(x)$ we have $g(x_n)\\to g(x).$</p>\n\n<p>The case for infinite $U^-$ is handled similarly. </p>\n\n<p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\\to \\mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\\to \\mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p>\n" }, { "answer_id": 2195581, "score": 1, "text": "<p>I haven't the time to check your proof right now. But to address your other questions:</p>\n\n<ul>\n<li><p>Extension to the case where the codomain is $\\mathbb{R}^d$ is immediate, as you say.</p></li>\n<li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \\{0,1\\}$. Let $Y = \\{0,1\\}$ with its discrete metric. Define $f : E \\to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\\{0,1\\}$.</p></li>\n<li><p>The general result which Rudin alludes to is the <a href=\"https://en.wikipedia.org/wiki/Tietze_extension_theorem\" rel=\"nofollow noreferrer\">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li>\n</ul>\n" }, { "answer_id": 2195643, "score": 2, "text": "<p>Your proof that closedness is necessary is correct.</p>\n\n<p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. 
The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p>\n\n<p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, \"the $x$ such that $f(x)$ takes on the maximum value\" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p>\n\n<p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p>\n\n<p>The rest of your question has been already answered.</p>\n\n<p>Your reasoning about $\\mathbb{R}^n$ is correct.</p>\n" }, { "answer_id": 2100986, "score": 1, "text": "<p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\\mathbb R$ \\ $E=\\cup J$ where each member of $J$ is $\\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p>\n\n<p>$g$ is continuous at each $x\\in \\mathbb R$ \\ $E$ because $x\\in j\\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p>\n\n<p>For $x\\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p>\n\n<p>(1). If $T=\\{n: x_n\\in E\\}$ is infinite, then since $f$ is continuous on $E$ we have $\\lim_{n\\to\\infty\\land n\\in T}g(x_n)=\\lim_{n\\to \\infty \\land x\\in T}f(x_n)=f(x)=g(x).$</p>\n\n<p>(2). If $U=\\{n:x_n \\not \\in E\\}$ is infinite, let $U^+=\\{n\\in U:x_n&gt;x\\}$ and $U^-=\\{n:x_n&lt;x\\}.$ </p>\n\n<p>If $U^+$ is infinite then either </p>\n\n<p>(i). $x=\\inf j$ for some (unique)$j\\in J,$ and by the def'n of $g|_{j}$ we have $\\lim_{n\\to \\infty\\land n\\in U^+}g(x_n)=f(x).$ </p>\n\n<p>(ii). OR $x_n\\in j_n=(a_n,b_n)\\in J$ with $b_n&lt;\\infty$, for all but finitely many $n\\in U^+$ and we will have $b_n\\to x$ and $a_n\\to x$ .Now $$\\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\\max (f(a_n),f(b_n)).$$ Since $f(a_n)\\to f(x)=g(x)$ and $f(b_n)\\to f(x)=g(x)$ we have $g(x_n)\\to g(x).$</p>\n\n<p>The case for infinite $U^-$ is handled similarly. 
</p>\n\n<p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\\to \\mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\\to \\mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p>\n" }, { "answer_id": 2195581, "score": 1, "text": "<p>I haven't the time to check your proof right now. But to address your other questions:</p>\n\n<ul>\n<li><p>Extension to the case where the codomain is $\\mathbb{R}^d$ is immediate, as you say.</p></li>\n<li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \\{0,1\\}$. Let $Y = \\{0,1\\}$ with its discrete metric. Define $f : E \\to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\\{0,1\\}$.</p></li>\n<li><p>The general result which Rudin alludes to is the <a href=\"https://en.wikipedia.org/wiki/Tietze_extension_theorem\" rel=\"nofollow noreferrer\">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li>\n</ul>\n" }, { "answer_id": 2195643, "score": 2, "text": "<p>Your proof that closedness is necessary is correct.</p>\n\n<p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p>\n\n<p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, \"the $x$ such that $f(x)$ takes on the maximum value\" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p>\n\n<p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. 
If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p>\n\n<p>The rest of your question has been already answered.</p>\n\n<p>Your reasoning about $\\mathbb{R}^n$ is correct.</p>\n" }, { "answer_id": 2100986, "score": 1, "text": "<p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\\mathbb R$ \\ $E=\\cup J$ where each member of $J$ is $\\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p>\n\n<p>$g$ is continuous at each $x\\in \\mathbb R$ \\ $E$ because $x\\in j\\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p>\n\n<p>For $x\\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p>\n\n<p>(1). If $T=\\{n: x_n\\in E\\}$ is infinite, then since $f$ is continuous on $E$ we have $\\lim_{n\\to\\infty\\land n\\in T}g(x_n)=\\lim_{n\\to \\infty \\land x\\in T}f(x_n)=f(x)=g(x).$</p>\n\n<p>(2). If $U=\\{n:x_n \\not \\in E\\}$ is infinite, let $U^+=\\{n\\in U:x_n&gt;x\\}$ and $U^-=\\{n:x_n&lt;x\\}.$ </p>\n\n<p>If $U^+$ is infinite then either </p>\n\n<p>(i). $x=\\inf j$ for some (unique)$j\\in J,$ and by the def'n of $g|_{j}$ we have $\\lim_{n\\to \\infty\\land n\\in U^+}g(x_n)=f(x).$ </p>\n\n<p>(ii). OR $x_n\\in j_n=(a_n,b_n)\\in J$ with $b_n&lt;\\infty$, for all but finitely many $n\\in U^+$ and we will have $b_n\\to x$ and $a_n\\to x$ .Now $$\\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\\max (f(a_n),f(b_n)).$$ Since $f(a_n)\\to f(x)=g(x)$ and $f(b_n)\\to f(x)=g(x)$ we have $g(x_n)\\to g(x).$</p>\n\n<p>The case for infinite $U^-$ is handled similarly. </p>\n\n<p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\\to \\mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\\to \\mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p>\n" }, { "answer_id": 2195581, "score": 1, "text": "<p>I haven't the time to check your proof right now. But to address your other questions:</p>\n\n<ul>\n<li><p>Extension to the case where the codomain is $\\mathbb{R}^d$ is immediate, as you say.</p></li>\n<li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \\{0,1\\}$. Let $Y = \\{0,1\\}$ with its discrete metric. Define $f : E \\to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\\{0,1\\}$.</p></li>\n<li><p>The general result which Rudin alludes to is the <a href=\"https://en.wikipedia.org/wiki/Tietze_extension_theorem\" rel=\"nofollow noreferrer\">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li>\n</ul>\n" }, { "answer_id": 2195643, "score": 2, "text": "<p>Your proof that closedness is necessary is correct.</p>\n\n<p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. 
The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p>\n\n<p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, \"the $x$ such that $f(x)$ takes on the maximum value\" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p>\n\n<p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p>\n\n<p>The rest of your question has been already answered.</p>\n\n<p>Your reasoning about $\\mathbb{R}^n$ is correct.</p>\n" }, { "answer_id": 2100986, "score": 1, "text": "<p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\\mathbb R$ \\ $E=\\cup J$ where each member of $J$ is $\\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p>\n\n<p>$g$ is continuous at each $x\\in \\mathbb R$ \\ $E$ because $x\\in j\\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p>\n\n<p>For $x\\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p>\n\n<p>(1). If $T=\\{n: x_n\\in E\\}$ is infinite, then since $f$ is continuous on $E$ we have $\\lim_{n\\to\\infty\\land n\\in T}g(x_n)=\\lim_{n\\to \\infty \\land x\\in T}f(x_n)=f(x)=g(x).$</p>\n\n<p>(2). If $U=\\{n:x_n \\not \\in E\\}$ is infinite, let $U^+=\\{n\\in U:x_n&gt;x\\}$ and $U^-=\\{n:x_n&lt;x\\}.$ </p>\n\n<p>If $U^+$ is infinite then either </p>\n\n<p>(i). $x=\\inf j$ for some (unique)$j\\in J,$ and by the def'n of $g|_{j}$ we have $\\lim_{n\\to \\infty\\land n\\in U^+}g(x_n)=f(x).$ </p>\n\n<p>(ii). OR $x_n\\in j_n=(a_n,b_n)\\in J$ with $b_n&lt;\\infty$, for all but finitely many $n\\in U^+$ and we will have $b_n\\to x$ and $a_n\\to x$ .Now $$\\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\\max (f(a_n),f(b_n)).$$ Since $f(a_n)\\to f(x)=g(x)$ and $f(b_n)\\to f(x)=g(x)$ we have $g(x_n)\\to g(x).$</p>\n\n<p>The case for infinite $U^-$ is handled similarly. 
</p>\n\n<p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\\to \\mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\\to \\mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p>\n" }, { "answer_id": 2195581, "score": 1, "text": "<p>I haven't the time to check your proof right now. But to address your other questions:</p>\n\n<ul>\n<li><p>Extension to the case where the codomain is $\\mathbb{R}^d$ is immediate, as you say.</p></li>\n<li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \\{0,1\\}$. Let $Y = \\{0,1\\}$ with its discrete metric. Define $f : E \\to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\\{0,1\\}$.</p></li>\n<li><p>The general result which Rudin alludes to is the <a href=\"https://en.wikipedia.org/wiki/Tietze_extension_theorem\" rel=\"nofollow noreferrer\">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li>\n</ul>\n" }, { "answer_id": 2195643, "score": 2, "text": "<p>Your proof that closedness is necessary is correct.</p>\n\n<p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p>\n\n<p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, \"the $x$ such that $f(x)$ takes on the maximum value\" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p>\n\n<p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. 
If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p>\n\n<p>The rest of your question has been already answered.</p>\n\n<p>Your reasoning about $\\mathbb{R}^n$ is correct.</p>\n" }, { "answer_id": 2100986, "score": 1, "text": "<p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\\mathbb R$ \\ $E=\\cup J$ where each member of $J$ is $\\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p>\n\n<p>$g$ is continuous at each $x\\in \\mathbb R$ \\ $E$ because $x\\in j\\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p>\n\n<p>For $x\\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p>\n\n<p>(1). If $T=\\{n: x_n\\in E\\}$ is infinite, then since $f$ is continuous on $E$ we have $\\lim_{n\\to\\infty\\land n\\in T}g(x_n)=\\lim_{n\\to \\infty \\land x\\in T}f(x_n)=f(x)=g(x).$</p>\n\n<p>(2). If $U=\\{n:x_n \\not \\in E\\}$ is infinite, let $U^+=\\{n\\in U:x_n&gt;x\\}$ and $U^-=\\{n:x_n&lt;x\\}.$ </p>\n\n<p>If $U^+$ is infinite then either </p>\n\n<p>(i). $x=\\inf j$ for some (unique)$j\\in J,$ and by the def'n of $g|_{j}$ we have $\\lim_{n\\to \\infty\\land n\\in U^+}g(x_n)=f(x).$ </p>\n\n<p>(ii). OR $x_n\\in j_n=(a_n,b_n)\\in J$ with $b_n&lt;\\infty$, for all but finitely many $n\\in U^+$ and we will have $b_n\\to x$ and $a_n\\to x$ .Now $$\\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\\max (f(a_n),f(b_n)).$$ Since $f(a_n)\\to f(x)=g(x)$ and $f(b_n)\\to f(x)=g(x)$ we have $g(x_n)\\to g(x).$</p>\n\n<p>The case for infinite $U^-$ is handled similarly. </p>\n\n<p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\\to \\mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\\to \\mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p>\n" }, { "answer_id": 2195581, "score": 1, "text": "<p>I haven't the time to check your proof right now. But to address your other questions:</p>\n\n<ul>\n<li><p>Extension to the case where the codomain is $\\mathbb{R}^d$ is immediate, as you say.</p></li>\n<li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \\{0,1\\}$. Let $Y = \\{0,1\\}$ with its discrete metric. Define $f : E \\to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\\{0,1\\}$.</p></li>\n<li><p>The general result which Rudin alludes to is the <a href=\"https://en.wikipedia.org/wiki/Tietze_extension_theorem\" rel=\"nofollow noreferrer\">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li>\n</ul>\n" }, { "answer_id": 2195643, "score": 2, "text": "<p>Your proof that closedness is necessary is correct.</p>\n\n<p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. 
The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p>\n\n<p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, \"the $x$ such that $f(x)$ takes on the maximum value\" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p>\n\n<p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p>\n\n<p>The rest of your question has been already answered.</p>\n\n<p>Your reasoning about $\\mathbb{R}^n$ is correct.</p>\n" }, { "answer_id": 2100986, "score": 1, "text": "<p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\\mathbb R$ \\ $E=\\cup J$ where each member of $J$ is $\\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p>\n\n<p>$g$ is continuous at each $x\\in \\mathbb R$ \\ $E$ because $x\\in j\\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p>\n\n<p>For $x\\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p>\n\n<p>(1). If $T=\\{n: x_n\\in E\\}$ is infinite, then since $f$ is continuous on $E$ we have $\\lim_{n\\to\\infty\\land n\\in T}g(x_n)=\\lim_{n\\to \\infty \\land x\\in T}f(x_n)=f(x)=g(x).$</p>\n\n<p>(2). If $U=\\{n:x_n \\not \\in E\\}$ is infinite, let $U^+=\\{n\\in U:x_n&gt;x\\}$ and $U^-=\\{n:x_n&lt;x\\}.$ </p>\n\n<p>If $U^+$ is infinite then either </p>\n\n<p>(i). $x=\\inf j$ for some (unique)$j\\in J,$ and by the def'n of $g|_{j}$ we have $\\lim_{n\\to \\infty\\land n\\in U^+}g(x_n)=f(x).$ </p>\n\n<p>(ii). OR $x_n\\in j_n=(a_n,b_n)\\in J$ with $b_n&lt;\\infty$, for all but finitely many $n\\in U^+$ and we will have $b_n\\to x$ and $a_n\\to x$ .Now $$\\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\\max (f(a_n),f(b_n)).$$ Since $f(a_n)\\to f(x)=g(x)$ and $f(b_n)\\to f(x)=g(x)$ we have $g(x_n)\\to g(x).$</p>\n\n<p>The case for infinite $U^-$ is handled similarly. 
</p>\n\n<p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\\to \\mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\\to \\mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p>\n" }, { "answer_id": 2195581, "score": 1, "text": "<p>I haven't the time to check your proof right now. But to address your other questions:</p>\n\n<ul>\n<li><p>Extension to the case where the codomain is $\\mathbb{R}^d$ is immediate, as you say.</p></li>\n<li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \\{0,1\\}$. Let $Y = \\{0,1\\}$ with its discrete metric. Define $f : E \\to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\\{0,1\\}$.</p></li>\n<li><p>The general result which Rudin alludes to is the <a href=\"https://en.wikipedia.org/wiki/Tietze_extension_theorem\" rel=\"nofollow noreferrer\">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li>\n</ul>\n" }, { "answer_id": 2195643, "score": 2, "text": "<p>Your proof that closedness is necessary is correct.</p>\n\n<p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p>\n\n<p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, \"the $x$ such that $f(x)$ takes on the maximum value\" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p>\n\n<p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. 
Question: Prob. 5, Chap. 4 in Baby Rudin: Continuous extension of a function defined on a closed set Answer 1: <p>Your proof that closedness is necessary is correct.</p> <p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the intervals with an arbitrary ordinal.</p> <p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em>, which means that it only takes into account a small neighborhood of the point in question (in contrast, "the $x$ such that $f(x)$ takes on the maximum value" is not a local property). If $x$ is an interior point of the set on which the original function is defined, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region on which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree with each other and with the actual value of $g$.</p> <p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the boundary of the region on which $f$ is defined. In fact, if you take <em>any continuous functions</em> on those intervals that take on the same values as $f$ at their endpoints, you can use those to stitch together an extension of $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as the wiggles themselves are continuous.</p> <p>The rest of your question has already been answered.</p> <p>Your reasoning about $\mathbb{R}^n$ is correct.</p> Answer 2: <p>The countability of the open intervals comprising the complement of $E$ is not relevant. It is enough that $\mathbb R\setminus E=\bigcup J$ where each member of $J$ is $\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pairwise disjoint.</p> <p>$g$ is continuous at each $x\in \mathbb R\setminus E$ because $x\in j\in J$ for some $j,$ and $j$ is open, and $g$ is linear on $j.$</p> <p>For $x\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p> <p>(1). If $T=\{n: x_n\in E\}$ is infinite, then since $f$ is continuous on $E$ we have $\lim_{n\to\infty\land n\in T}g(x_n)=\lim_{n\to \infty \land n\in T}f(x_n)=f(x)=g(x).$</p> <p>(2). If $U=\{n:x_n \not \in E\}$ is infinite, let $U^+=\{n\in U:x_n&gt;x\}$ and $U^-=\{n\in U:x_n&lt;x\}.$ </p> <p>If $U^+$ is infinite then either </p> <p>(i). $x=\inf j$ for some (unique) $j\in J,$ and by the definition of $g|_{j}$ we have $\lim_{n\to \infty\land n\in U^+}g(x_n)=f(x).$ </p> <p>(ii). Or $x_n\in j_n=(a_n,b_n)\in J$ with $b_n&lt;\infty$ for all but finitely many $n\in U^+$, and we will have $b_n\to x$ and $a_n\to x$. Now $$\min (f(a_n),f(b_n))\le g(x_n)\le \max (f(a_n),f(b_n)).$$ Since $f(a_n)\to f(x)=g(x)$ and $f(b_n)\to f(x)=g(x)$ we have $g(x_n)\to g(x).$</p> <p>The case for infinite $U^-$ is handled similarly. 
</p> <p>An exercise in General Topology, by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of a metric space $X$ and $f:E\to \mathbb R$ is continuous, then $f$ can be extended to a continuous $g:X\to \mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. Right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p> Answer 3: <p>I haven't the time to check your proof right now. But to address your other questions:</p> <ul> <li><p>Extension to the case where the codomain is $\mathbb{R}^d$ is immediate, as you say.</p></li> <li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \{0,1\}$. Let $Y = \{0,1\}$ with its discrete metric. Define $f : E \to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\{0,1\}$.</p></li> <li><p>The general result which Rudin alludes to is the <a href="https://en.wikipedia.org/wiki/Tietze_extension_theorem" rel="nofollow noreferrer">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li> </ul>
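For reference, the usual textbook form of the Tietze extension theorem linked in the last answer is the normal-space version:

$$\text{if }X\text{ is normal, }A\subseteq X\text{ is closed, and }f:A\to\mathbb R\text{ is continuous, then }f\text{ extends to a continuous }F:X\to\mathbb R.$$

Moreover, if $|f|\le M$ on $A$, then $F$ can be chosen with $|F|\le M$ on $X$; since every metric space is normal, this contains the metric-space statement attributed to Engelking above.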
If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p> <p>The rest of your question has been already answered.</p> <p>Your reasoning about $\mathbb{R}^n$ is correct.</p> Answer 5: <p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\mathbb R$ \ $E=\cup J$ where each member of $J$ is $\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p> <p>$g$ is continuous at each $x\in \mathbb R$ \ $E$ because $x\in j\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p> <p>For $x\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p> <p>(1). If $T=\{n: x_n\in E\}$ is infinite, then since $f$ is continuous on $E$ we have $\lim_{n\to\infty\land n\in T}g(x_n)=\lim_{n\to \infty \land x\in T}f(x_n)=f(x)=g(x).$</p> <p>(2). If $U=\{n:x_n \not \in E\}$ is infinite, let $U^+=\{n\in U:x_n&gt;x\}$ and $U^-=\{n:x_n&lt;x\}.$ </p> <p>If $U^+$ is infinite then either </p> <p>(i). $x=\inf j$ for some (unique)$j\in J,$ and by the def'n of $g|_{j}$ we have $\lim_{n\to \infty\land n\in U^+}g(x_n)=f(x).$ </p> <p>(ii). OR $x_n\in j_n=(a_n,b_n)\in J$ with $b_n&lt;\infty$, for all but finitely many $n\in U^+$ and we will have $b_n\to x$ and $a_n\to x$ .Now $$\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\max (f(a_n),f(b_n)).$$ Since $f(a_n)\to f(x)=g(x)$ and $f(b_n)\to f(x)=g(x)$ we have $g(x_n)\to g(x).$</p> <p>The case for infinite $U^-$ is handled similarly. </p> <p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\to \mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\to \mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p> Answer 6: <p>I haven't the time to check your proof right now. But to address your other questions:</p> <ul> <li><p>Extension to the case where the codomain is $\mathbb{R}^d$ is immediate, as you say.</p></li> <li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \{0,1\}$. Let $Y = \{0,1\}$ with its discrete metric. Define $f : E \to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\{0,1\}$.</p></li> <li><p>The general result which Rudin alludes to is the <a href="https://en.wikipedia.org/wiki/Tietze_extension_theorem" rel="nofollow noreferrer">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li> </ul> Answer 7: <p>Your proof that closedness is necessary is correct.</p> <p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p> <p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. 
Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, "the $x$ such that $f(x)$ takes on the maximum value" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p> <p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p> <p>The rest of your question has been already answered.</p> <p>Your reasoning about $\mathbb{R}^n$ is correct.</p> Answer 8: <p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\mathbb R$ \ $E=\cup J$ where each member of $J$ is $\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p> <p>$g$ is continuous at each $x\in \mathbb R$ \ $E$ because $x\in j\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p> <p>For $x\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p> <p>(1). If $T=\{n: x_n\in E\}$ is infinite, then since $f$ is continuous on $E$ we have $\lim_{n\to\infty\land n\in T}g(x_n)=\lim_{n\to \infty \land x\in T}f(x_n)=f(x)=g(x).$</p> <p>(2). If $U=\{n:x_n \not \in E\}$ is infinite, let $U^+=\{n\in U:x_n&gt;x\}$ and $U^-=\{n:x_n&lt;x\}.$ </p> <p>If $U^+$ is infinite then either </p> <p>(i). $x=\inf j$ for some (unique)$j\in J,$ and by the def'n of $g|_{j}$ we have $\lim_{n\to \infty\land n\in U^+}g(x_n)=f(x).$ </p> <p>(ii). OR $x_n\in j_n=(a_n,b_n)\in J$ with $b_n&lt;\infty$, for all but finitely many $n\in U^+$ and we will have $b_n\to x$ and $a_n\to x$ .Now $$\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\max (f(a_n),f(b_n)).$$ Since $f(a_n)\to f(x)=g(x)$ and $f(b_n)\to f(x)=g(x)$ we have $g(x_n)\to g(x).$</p> <p>The case for infinite $U^-$ is handled similarly. </p> <p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\to \mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\to \mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p> Answer 9: <p>I haven't the time to check your proof right now. 
But to address your other questions:</p> <ul> <li><p>Extension to the case where the codomain is $\mathbb{R}^d$ is immediate, as you say.</p></li> <li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \{0,1\}$. Let $Y = \{0,1\}$ with its discrete metric. Define $f : E \to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\{0,1\}$.</p></li> <li><p>The general result which Rudin alludes to is the <a href="https://en.wikipedia.org/wiki/Tietze_extension_theorem" rel="nofollow noreferrer">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li> </ul> Answer 10: <p>Your proof that closedness is necessary is correct.</p> <p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p> <p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, "the $x$ such that $f(x)$ takes on the maximum value" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p> <p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p> <p>The rest of your question has been already answered.</p> <p>Your reasoning about $\mathbb{R}^n$ is correct.</p> Answer 11: <p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\mathbb R$ \ $E=\cup J$ where each member of $J$ is $\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p> <p>$g$ is continuous at each $x\in \mathbb R$ \ $E$ because $x\in j\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p> <p>For $x\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p> <p>(1). If $T=\{n: x_n\in E\}$ is infinite, then since $f$ is continuous on $E$ we have $\lim_{n\to\infty\land n\in T}g(x_n)=\lim_{n\to \infty \land x\in T}f(x_n)=f(x)=g(x).$</p> <p>(2). 
If $U=\{n:x_n \not \in E\}$ is infinite, let $U^+=\{n\in U:x_n&gt;x\}$ and $U^-=\{n:x_n&lt;x\}.$ </p> <p>If $U^+$ is infinite then either </p> <p>(i). $x=\inf j$ for some (unique)$j\in J,$ and by the def'n of $g|_{j}$ we have $\lim_{n\to \infty\land n\in U^+}g(x_n)=f(x).$ </p> <p>(ii). OR $x_n\in j_n=(a_n,b_n)\in J$ with $b_n&lt;\infty$, for all but finitely many $n\in U^+$ and we will have $b_n\to x$ and $a_n\to x$ .Now $$\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\max (f(a_n),f(b_n)).$$ Since $f(a_n)\to f(x)=g(x)$ and $f(b_n)\to f(x)=g(x)$ we have $g(x_n)\to g(x).$</p> <p>The case for infinite $U^-$ is handled similarly. </p> <p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\to \mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\to \mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p> Answer 12: <p>I haven't the time to check your proof right now. But to address your other questions:</p> <ul> <li><p>Extension to the case where the codomain is $\mathbb{R}^d$ is immediate, as you say.</p></li> <li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \{0,1\}$. Let $Y = \{0,1\}$ with its discrete metric. Define $f : E \to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\{0,1\}$.</p></li> <li><p>The general result which Rudin alludes to is the <a href="https://en.wikipedia.org/wiki/Tietze_extension_theorem" rel="nofollow noreferrer">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li> </ul> Answer 13: <p>Your proof that closedness is necessary is correct.</p> <p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p> <p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, "the $x$ such that $f(x)$ takes on the maximum value" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p> <p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. 
In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p> <p>The rest of your question has been already answered.</p> <p>Your reasoning about $\mathbb{R}^n$ is correct.</p> Answer 14: <p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\mathbb R$ \ $E=\cup J$ where each member of $J$ is $\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p> <p>$g$ is continuous at each $x\in \mathbb R$ \ $E$ because $x\in j\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p> <p>For $x\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p> <p>(1). If $T=\{n: x_n\in E\}$ is infinite, then since $f$ is continuous on $E$ we have $\lim_{n\to\infty\land n\in T}g(x_n)=\lim_{n\to \infty \land x\in T}f(x_n)=f(x)=g(x).$</p> <p>(2). If $U=\{n:x_n \not \in E\}$ is infinite, let $U^+=\{n\in U:x_n&gt;x\}$ and $U^-=\{n:x_n&lt;x\}.$ </p> <p>If $U^+$ is infinite then either </p> <p>(i). $x=\inf j$ for some (unique)$j\in J,$ and by the def'n of $g|_{j}$ we have $\lim_{n\to \infty\land n\in U^+}g(x_n)=f(x).$ </p> <p>(ii). OR $x_n\in j_n=(a_n,b_n)\in J$ with $b_n&lt;\infty$, for all but finitely many $n\in U^+$ and we will have $b_n\to x$ and $a_n\to x$ .Now $$\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\max (f(a_n),f(b_n)).$$ Since $f(a_n)\to f(x)=g(x)$ and $f(b_n)\to f(x)=g(x)$ we have $g(x_n)\to g(x).$</p> <p>The case for infinite $U^-$ is handled similarly. </p> <p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\to \mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\to \mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p> Answer 15: <p>I haven't the time to check your proof right now. But to address your other questions:</p> <ul> <li><p>Extension to the case where the codomain is $\mathbb{R}^d$ is immediate, as you say.</p></li> <li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \{0,1\}$. Let $Y = \{0,1\}$ with its discrete metric. Define $f : E \to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\{0,1\}$.</p></li> <li><p>The general result which Rudin alludes to is the <a href="https://en.wikipedia.org/wiki/Tietze_extension_theorem" rel="nofollow noreferrer">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li> </ul> Answer 16: <p>Your proof that closedness is necessary is correct.</p> <p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p> <p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. 
Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, "the $x$ such that $f(x)$ takes on the maximum value" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p> <p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p> <p>The rest of your question has been already answered.</p> <p>Your reasoning about $\mathbb{R}^n$ is correct.</p> Answer 17: <p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\mathbb R$ \ $E=\cup J$ where each member of $J$ is $\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p> <p>$g$ is continuous at each $x\in \mathbb R$ \ $E$ because $x\in j\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p> <p>For $x\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p> <p>(1). If $T=\{n: x_n\in E\}$ is infinite, then since $f$ is continuous on $E$ we have $\lim_{n\to\infty\land n\in T}g(x_n)=\lim_{n\to \infty \land x\in T}f(x_n)=f(x)=g(x).$</p> <p>(2). If $U=\{n:x_n \not \in E\}$ is infinite, let $U^+=\{n\in U:x_n&gt;x\}$ and $U^-=\{n:x_n&lt;x\}.$ </p> <p>If $U^+$ is infinite then either </p> <p>(i). $x=\inf j$ for some (unique)$j\in J,$ and by the def'n of $g|_{j}$ we have $\lim_{n\to \infty\land n\in U^+}g(x_n)=f(x).$ </p> <p>(ii). OR $x_n\in j_n=(a_n,b_n)\in J$ with $b_n&lt;\infty$, for all but finitely many $n\in U^+$ and we will have $b_n\to x$ and $a_n\to x$ .Now $$\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\max (f(a_n),f(b_n)).$$ Since $f(a_n)\to f(x)=g(x)$ and $f(b_n)\to f(x)=g(x)$ we have $g(x_n)\to g(x).$</p> <p>The case for infinite $U^-$ is handled similarly. </p> <p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\to \mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\to \mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p> Answer 18: <p>I haven't the time to check your proof right now. 
But to address your other questions:</p> <ul> <li><p>Extension to the case where the codomain is $\mathbb{R}^d$ is immediate, as you say.</p></li> <li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \{0,1\}$. Let $Y = \{0,1\}$ with its discrete metric. Define $f : E \to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\{0,1\}$.</p></li> <li><p>The general result which Rudin alludes to is the <a href="https://en.wikipedia.org/wiki/Tietze_extension_theorem" rel="nofollow noreferrer">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li> </ul> Answer 19: <p>Your proof that closedness is necessary is correct.</p> <p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p> <p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, "the $x$ such that $f(x)$ takes on the maximum value" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p> <p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p> <p>The rest of your question has been already answered.</p> <p>Your reasoning about $\mathbb{R}^n$ is correct.</p> Answer 20: <p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\mathbb R$ \ $E=\cup J$ where each member of $J$ is $\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p> <p>$g$ is continuous at each $x\in \mathbb R$ \ $E$ because $x\in j\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p> <p>For $x\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p> <p>(1). If $T=\{n: x_n\in E\}$ is infinite, then since $f$ is continuous on $E$ we have $\lim_{n\to\infty\land n\in T}g(x_n)=\lim_{n\to \infty \land x\in T}f(x_n)=f(x)=g(x).$</p> <p>(2). 
If $U=\{n:x_n \not \in E\}$ is infinite, let $U^+=\{n\in U:x_n&gt;x\}$ and $U^-=\{n:x_n&lt;x\}.$ </p> <p>If $U^+$ is infinite then either </p> <p>(i). $x=\inf j$ for some (unique)$j\in J,$ and by the def'n of $g|_{j}$ we have $\lim_{n\to \infty\land n\in U^+}g(x_n)=f(x).$ </p> <p>(ii). OR $x_n\in j_n=(a_n,b_n)\in J$ with $b_n&lt;\infty$, for all but finitely many $n\in U^+$ and we will have $b_n\to x$ and $a_n\to x$ .Now $$\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\max (f(a_n),f(b_n)).$$ Since $f(a_n)\to f(x)=g(x)$ and $f(b_n)\to f(x)=g(x)$ we have $g(x_n)\to g(x).$</p> <p>The case for infinite $U^-$ is handled similarly. </p> <p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\to \mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\to \mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p> Answer 21: <p>I haven't the time to check your proof right now. But to address your other questions:</p> <ul> <li><p>Extension to the case where the codomain is $\mathbb{R}^d$ is immediate, as you say.</p></li> <li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \{0,1\}$. Let $Y = \{0,1\}$ with its discrete metric. Define $f : E \to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\{0,1\}$.</p></li> <li><p>The general result which Rudin alludes to is the <a href="https://en.wikipedia.org/wiki/Tietze_extension_theorem" rel="nofollow noreferrer">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li> </ul> Answer 22: <p>Your proof that closedness is necessary is correct.</p> <p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p> <p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, "the $x$ such that $f(x)$ takes on the maximum value" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p> <p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. 
In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p> <p>The rest of your question has been already answered.</p> <p>Your reasoning about $\mathbb{R}^n$ is correct.</p> Answer 23: <p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\mathbb R$ \ $E=\cup J$ where each member of $J$ is $\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p> <p>$g$ is continuous at each $x\in \mathbb R$ \ $E$ because $x\in j\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p> <p>For $x\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p> <p>(1). If $T=\{n: x_n\in E\}$ is infinite, then since $f$ is continuous on $E$ we have $\lim_{n\to\infty\land n\in T}g(x_n)=\lim_{n\to \infty \land x\in T}f(x_n)=f(x)=g(x).$</p> <p>(2). If $U=\{n:x_n \not \in E\}$ is infinite, let $U^+=\{n\in U:x_n&gt;x\}$ and $U^-=\{n:x_n&lt;x\}.$ </p> <p>If $U^+$ is infinite then either </p> <p>(i). $x=\inf j$ for some (unique)$j\in J,$ and by the def'n of $g|_{j}$ we have $\lim_{n\to \infty\land n\in U^+}g(x_n)=f(x).$ </p> <p>(ii). OR $x_n\in j_n=(a_n,b_n)\in J$ with $b_n&lt;\infty$, for all but finitely many $n\in U^+$ and we will have $b_n\to x$ and $a_n\to x$ .Now $$\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\max (f(a_n),f(b_n)).$$ Since $f(a_n)\to f(x)=g(x)$ and $f(b_n)\to f(x)=g(x)$ we have $g(x_n)\to g(x).$</p> <p>The case for infinite $U^-$ is handled similarly. </p> <p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\to \mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\to \mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p> Answer 24: <p>I haven't the time to check your proof right now. But to address your other questions:</p> <ul> <li><p>Extension to the case where the codomain is $\mathbb{R}^d$ is immediate, as you say.</p></li> <li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \{0,1\}$. Let $Y = \{0,1\}$ with its discrete metric. Define $f : E \to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\{0,1\}$.</p></li> <li><p>The general result which Rudin alludes to is the <a href="https://en.wikipedia.org/wiki/Tietze_extension_theorem" rel="nofollow noreferrer">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li> </ul> Answer 25: <p>Your proof that closedness is necessary is correct.</p> <p>Your proof of the theorem is correct, and is what I think Rudin intended with the hint. The countability of the segments is not important and can be ignored in the proof if you are willing to index the interval with an arbitrary ordinal.</p> <p>To prove that it is correct, one reasons as follows: a function is continuous if it is continuous at every point. 
Continuity at a point is a <em>local property</em> which means that it only takes into account a small neighborhood of the point in question (in contrast, "the $x$ such that $f(x)$ takes on the maximum value" is not a local property). If $x$ is in the domain of the original function and not at an exterior point, then in a small enough neighborhood of $x$, $f=g$. Thus $g$ is continuous at those points because $f$ is. Similarly, on the complement of the region on which $f$ is defined, we find out that on a small enough interval, $g$ agrees with a linear function, and linear functions are known to be continuous. Thus $g$ is continuous everywhere except possibly at the boundary of the region at which $f$ is defined. To show that $g$ is continuous at these points, compute the left and right limits separately, and then see that they agree and agree with the actual value of $g$.</p> <p>One thing you'll notice about this proof is that the only thing that matters about the linear function is that it is continuous and has limits that make $g$ continuous at the exterior of the region on which $f$ is defined. In fact, if you take <em>any continuous function</em> that takes on the same values as $f$ on said regions, you can use those to stitch together $f$. If you draw a picture, it should be visually clear that you can take your $g$ and introduce whatever wiggles you wish into the straight lines without violating continuity, as long as you keep the wiggles themselves as being continuous.</p> <p>The rest of your question has been already answered.</p> <p>Your reasoning about $\mathbb{R}^n$ is correct.</p> Answer 26: <p>The countability of the open intervals comprising the complement of $E$ is not relevant.It is enough that $\mathbb R$ \ $E=\cup J$ where each member of $J$ is $\mathbb R$ or an open half-line or a non-empty bounded open interval, and that the members of $J$ are pair-wise disjoint.</p> <p>$g$ is continuous at each $x\in \mathbb R$ \ $E$ because $x\in j\in J$ for some $j,$ and j is open, and $g$ is linear on $j.$</p> <p>For $x\in E$ let $S=(x_n)_n$ be a sequence converging to $x$:</p> <p>(1). If $T=\{n: x_n\in E\}$ is infinite, then since $f$ is continuous on $E$ we have $\lim_{n\to\infty\land n\in T}g(x_n)=\lim_{n\to \infty \land x\in T}f(x_n)=f(x)=g(x).$</p> <p>(2). If $U=\{n:x_n \not \in E\}$ is infinite, let $U^+=\{n\in U:x_n&gt;x\}$ and $U^-=\{n:x_n&lt;x\}.$ </p> <p>If $U^+$ is infinite then either </p> <p>(i). $x=\inf j$ for some (unique)$j\in J,$ and by the def'n of $g|_{j}$ we have $\lim_{n\to \infty\land n\in U^+}g(x_n)=f(x).$ </p> <p>(ii). OR $x_n\in j_n=(a_n,b_n)\in J$ with $b_n&lt;\infty$, for all but finitely many $n\in U^+$ and we will have $b_n\to x$ and $a_n\to x$ .Now $$\min (f(a_n),f(b_n))&lt;g(x_n)&lt;\max (f(a_n),f(b_n)).$$ Since $f(a_n)\to f(x)=g(x)$ and $f(b_n)\to f(x)=g(x)$ we have $g(x_n)\to g(x).$</p> <p>The case for infinite $U^-$ is handled similarly. </p> <p>An exercise in General Topology,by Engelking, in the chapter Metrizable Spaces, is to prove that if $E$ is a closed subset of the metric space $X$ and $f:E\to \mathbb R$ is continuous then $f$ can be extended to a continuous $g:X\to \mathbb R.$ He provides a formula for $g$ as a hint, and the rest is very easy. And right now I can't recall the formula and I can't find the book. (How can you not find an 800+ page orange-covered book?)</p> Answer 27: <p>I haven't the time to check your proof right now. 
But to address your other questions:</p> <ul> <li><p>Extension to the case where the codomain is $\mathbb{R}^d$ is immediate, as you say.</p></li> <li><p>The theorem is not true for two general metric spaces. Take $X = [0,1]$ with its usual metric and let $E = \{0,1\}$. Let $Y = \{0,1\}$ with its discrete metric. Define $f : E \to Y$ by $f(0)=0$, $f(1)=1$. No continuous extension $g$ can exist because it would map the connected space $[0,1]$ onto the disconnected space $\{0,1\}$.</p></li> <li><p>The general result which Rudin alludes to is the <a href="https://en.wikipedia.org/wiki/Tietze_extension_theorem" rel="nofollow noreferrer">Tietze extension theorem</a> and you can find its proof in most topology textbooks.</p></li> </ul>
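<p>To make the "compute the left and right limits separately" step above explicit (a worked sketch for one bounded complementary interval, with the linear interpolation written out): if $(a,b)$ is a bounded component of the complement of $E$ and $g$ is defined on it by $$g(x)=f(a)+\frac{f(b)-f(a)}{b-a}\,(x-a),\qquad a&lt;x&lt;b,$$ then $$\lim_{x\to b^-}g(x)=f(a)+\frac{f(b)-f(a)}{b-a}\,(b-a)=f(b)=g(b),$$ and symmetrically $\lim_{x\to a^+}g(x)=f(a)=g(a)$. Each interpolant therefore matches $f$ at both endpoints of its interval, which is exactly what makes the one-sided limits of $g$ at a boundary point of $E$ agree with the value of $g$ there.</p>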
Does the converse of the chain rule hold in general?
2,100,878
7
https://math.stackexchange.com/questions/2100878/does-the-converse-of-the-chain-rule-hold-in-general
<p>I've found the following theorem for single-variable, real-valued functions:</p> <blockquote> <p><strong>The converse of the chain rule:</strong> Suppose that $f,g$ and $u$ are related so that $f(x)=g(u(x))$. If $u$ is continuous at $x_0$, $f'(x_0)$ exists, and $g'(u(x_0))$ exists and is non-zero, then $u'(x_0)$ is defined and we have: $$f'(x_0)=g'(u(x_0))u'(x_0)$$</p> </blockquote> <p>I'm interested in knowing if this "other converse" also holds:</p> <blockquote> <p>Suppose that $f,g$ and $u$ are related so that $f(x)=g(u(x))$. If $g$ is continuous at $u(x_0)$, $f'(x_0)$ exists, and $u'(x_0)$ exists and is non-zero, then $g'(u(x_0))$ is defined and we have: $$f'(x_0)=g'(u(x_0))u'(x_0)$$</p> </blockquote> <p>If anyone is interested, <a href="https://www.google.com.uy/url?url=https://www.csie.ntu.edu.tw/~b89089/book/Apostol/ch5.pdf&amp;rct=j&amp;sa=U&amp;ved=0ahUKEwipmfKm0sfRAhUDj5AKHYYdA-EQFggqMAM&amp;sig2=yxF2trFGLN7ggWOvut_5dg&amp;q=converse+of+chain+rule&amp;usg=AFQjCNFCdmKrTDXjgc6hSeJk5oefBTBFfw">here</a> is the proof of the first one (page 12); I tried to prove the second one in an analogous way but I didn't succeed. For my immediate purposes I would be happy with knowing it holds, and in case it doesn't, some counterexample could be instructive.<br> I'm also interested in knowing if those statements generalise in the obvious way to functions from $\mathbb{R}^n$ to $\mathbb{R}^m$.</p> <p>Thanks in advance</p>
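<p>As a sanity check on the hypotheses (an illustrative example, with the functions chosen only for demonstration), the non-vanishing condition in the first statement cannot be dropped: take $u(x)=|x|$, $g(t)=t^2$ and $x_0=0$. Then $f(x)=g(u(x))=x^2$, so $f'(0)=0$ exists; $u$ is continuous at $0$; and $g'(u(0))=g'(0)$ exists but equals $0$. Here $u'(0)$ does not exist, so the conclusion of the first converse fails exactly when $g'(u(x_0))=0$ is allowed.</p>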
[ { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }, { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. 
We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }, { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. 
We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }, { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. 
We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }, { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. 
We have <span class=\"math-container\">$u'(0)=\lim\limits_{x\to0}\,(\pi x\lfloor 1/\pi x\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\in\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\varepsilon(x)x$</span> where <span class=\"math-container\">$\lim_{x\to0}\varepsilon(x)=\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;|x|&lt;\delta$</span>, <span class=\"math-container\">$|\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\delta u'(0)/2$</span> (so that both <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> lie in <span class=\"math-container\">$(0,\delta)$</span>) we have\n<span class=\"math-container\">$$\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\varepsilon(2x/u'(0))2x/u'(0)\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\&amp;=u(0)+x\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\varepsilon(x/2u'(0))x/2u'(0)\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\&amp;=u(0)+3x/4&lt;u(0)+x.\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> and the intermediate value theorem there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }, { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. 
We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }, { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. 
We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }, { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. 
We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }, { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. 
We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }, { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. 
We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }, { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. 
We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }
We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }, { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. 
We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" }, { "answer_id": 3473312, "score": 2, "text": "<p>The answer to the first question has been mentioned by user384138 in the comments.</p>\n\n<p><strong>A counterexample</strong> for the second question:</p>\n\n<p>Let <span class=\"math-container\">$x_0=0$</span>, <span class=\"math-container\">$g(x)=\\begin{cases}x\\sin(1/x)&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, and <span class=\"math-container\">$u(x)=\\begin{cases}(\\pi\\lfloor1/\\pi x\\rfloor)^{-1}&amp;x\\ne0\\\\0&amp;x=0\\end{cases}$</span>, then <span class=\"math-container\">$f(x)=(g\\circ u)(x)\\equiv0$</span>. 
We have <span class=\"math-container\">$u'(0)=\\lim\\limits_{x\\to0}\\,(\\pi x\\lfloor 1/\\pi x\\rfloor)^{-1}=1$</span>, <span class=\"math-container\">$g$</span> is continuous everywhere, and <span class=\"math-container\">$g$</span> is not differentiable at <span class=\"math-container\">$u(0)=0$</span>.</p>\n\n<p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class=\"math-container\">$\\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p>\n\n<blockquote>\n <p>Suppose <span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0\\in\\mathbb{R}$</span>, <span class=\"math-container\">$u'(x_0)$</span> exists and is nonzero, <span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0)$</span>, and <span class=\"math-container\">$f=g\\circ u$</span> is differentiable at <span class=\"math-container\">$x_0$</span>. Then <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0)$</span>, with <span class=\"math-container\">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p>\n</blockquote>\n\n<p><em>Proof</em>. Without loss of generality, suppose <span class=\"math-container\">$x_0=0$</span> and <span class=\"math-container\">$u'(0)&gt;0$</span>. In a neighborhood of <span class=\"math-container\">$0$</span> we have <span class=\"math-container\">$u(x)=u(0)+u'(0)x+\\varepsilon(x)x$</span> where <span class=\"math-container\">$\\lim_{x\\to0}\\varepsilon(x)=\\varepsilon(0)=0$</span>. Choose <span class=\"math-container\">$\\delta&gt;0$</span> such that for <span class=\"math-container\">$0&lt;x&lt;\\delta$</span>, <span class=\"math-container\">$|\\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class=\"math-container\">$0&lt;x&lt;\\delta u'(0)/2&lt;2\\delta u'(0)$</span> we have\n<span class=\"math-container\">$$\\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\\varepsilon(2x/u'(0))2x/u'(0)\\\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\\\&amp;=u(0)+x\\\\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\\varepsilon(x/2u'(0))x/2u'(0)\\\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\\\&amp;=u(0)+3x/4&lt;u(0)+x.\\end{aligned}$$</span>\nSimilarly we may deal with the case <span class=\"math-container\">$x&lt;0$</span> and conclude that if <span class=\"math-container\">$|x|$</span> is sufficiently small, by continuity of <span class=\"math-container\">$u$</span> there exists <span class=\"math-container\">$y$</span> between <span class=\"math-container\">$2x/u'(0)$</span> and <span class=\"math-container\">$x/2u'(0)$</span> such that <span class=\"math-container\">$u(y)=u(0)+x$</span>. 
We have\n<span class=\"math-container\">$$\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\\lim_{x\\to0}\\frac{g(u(x))-g(u(0))}{x}\\Big/\\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span>\nFor any sequence <span class=\"math-container\">$x_n\\to0$</span> where <span class=\"math-container\">$|x_n|&lt;\\delta u'(0)/2$</span>, choose <span class=\"math-container\">$y_n$</span> between <span class=\"math-container\">$x_n/2u'(0)$</span> and <span class=\"math-container\">$2x_n/u'(0)$</span> such that <span class=\"math-container\">$u(y_n)=u(0)+x_n$</span>, then <span class=\"math-container\">$y_n\\to0$</span> and\n<span class=\"math-container\">$$\\lim_{n\\to\\infty}\\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\\lim_{n\\to\\infty}\\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span>\nso <span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(0)$</span> with <span class=\"math-container\">$g'(u(0))=f'(0)/u'(0)$</span>. <span class=\"math-container\">$\\quad\\square$</span></p>\n" }, { "answer_id": 3483842, "score": 1, "text": "<p>This is not an answer, but we have several versions of a \"chain rule\" here and I thought it would be useful to be able to compare them side by side,\nand this won't fit in a comment.</p>\n\n<p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the \"converse\" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the \"converse\" (since the other variants also use this notation).</p>\n\n<p>So we have <span class=\"math-container\">$f(x) = g(u(x))$</span> for all <span class=\"math-container\">$x$</span> in a suitable domain, and <span class=\"math-container\">$x_0$</span> is in that domain.</p>\n\n<hr>\n\n<p>For the usual Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>(The text also gives a condition about <span class=\"math-container\">$u(x_0)$</span> being in the interior of the image of <span class=\"math-container\">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p>\n\n<hr>\n\n<p>For the \"converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g'(u(x_0)) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>It's a \"converse\" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the \"converse\" still requires <span class=\"math-container\">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p>\n\n<hr>\n\n<p>For the proposed \"other converse\" of the Chain Rule, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is continuous at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<hr>\n\n<p>For the theorem in Johnson Chen's answer, the conditions are:</p>\n\n<ul>\n<li><span class=\"math-container\">$u$</span> is continuous in a neighborhood of <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$g$</span> is defined in a neighborhood of <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f$</span> is differentiable at <span class=\"math-container\">$x_0.$</span></li>\n<li><span class=\"math-container\">$u'(x_0) \\neq 0.$</span></li>\n</ul>\n\n<p>The consequences are:</p>\n\n<ul>\n<li><span class=\"math-container\">$g$</span> is differentiable at <span class=\"math-container\">$u(x_0).$</span></li>\n<li><span class=\"math-container\">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li>\n</ul>\n\n<p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p>\n\n<hr>\n" } ]
Question: Does the converse of the chain rule hold in general? Answer 1: <p>The answer to the first question has been mentioned by user384138 in the comments.</p> <p><strong>A counterexample</strong> for the second question:</p> <p>Let <span class="math-container">$x_0=0$</span>, <span class="math-container">$g(x)=\begin{cases}x\sin(1/x)&amp;x\ne0\\0&amp;x=0\end{cases}$</span>, and <span class="math-container">$u(x)=\begin{cases}(\pi\lfloor1/\pi x\rfloor)^{-1}&amp;x\ne0\\0&amp;x=0\end{cases}$</span>, then <span class="math-container">$f(x)=(g\circ u)(x)\equiv0$</span>. We have <span class="math-container">$u'(0)=\lim\limits_{x\to0}\,(\pi x\lfloor 1/\pi x\rfloor)^{-1}=1$</span>, <span class="math-container">$g$</span> is continuous everywhere, and <span class="math-container">$g$</span> is not differentiable at <span class="math-container">$u(0)=0$</span>.</p> <p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class="math-container">$\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p> <blockquote> <p>Suppose <span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0\in\mathbb{R}$</span>, <span class="math-container">$u'(x_0)$</span> exists and is nonzero, <span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0)$</span>, and <span class="math-container">$f=g\circ u$</span> is differentiable at <span class="math-container">$x_0$</span>. Then <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0)$</span>, with <span class="math-container">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p> </blockquote> <p><em>Proof</em>. Without loss of generality, suppose <span class="math-container">$x_0=0$</span> and <span class="math-container">$u'(0)&gt;0$</span>. In a neighborhood of <span class="math-container">$0$</span> we have <span class="math-container">$u(x)=u(0)+u'(0)x+\varepsilon(x)x$</span> where <span class="math-container">$\lim_{x\to0}\varepsilon(x)=\varepsilon(0)=0$</span>. Choose <span class="math-container">$\delta&gt;0$</span> such that for <span class="math-container">$0&lt;x&lt;\delta$</span>, <span class="math-container">$|\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class="math-container">$0&lt;x&lt;\delta u'(0)/2$</span> (so that both <span class="math-container">$2x/u'(0)$</span> and <span class="math-container">$x/2u'(0)$</span> lie in <span class="math-container">$(0,\delta)$</span>) we have <span class="math-container">$$\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\varepsilon(2x/u'(0))2x/u'(0)\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\&amp;=u(0)+x\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\varepsilon(x/2u'(0))x/2u'(0)\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\&amp;=u(0)+3x/4&lt;u(0)+x.\end{aligned}$$</span> Similarly we may deal with the case <span class="math-container">$x&lt;0$</span> and conclude that if <span class="math-container">$|x|$</span> is sufficiently small, then by continuity of <span class="math-container">$u$</span> and the intermediate value theorem there exists <span class="math-container">$y$</span> between <span class="math-container">$2x/u'(0)$</span> and <span class="math-container">$x/2u'(0)$</span> such that <span class="math-container">$u(y)=u(0)+x$</span>. 
We have <span class="math-container">$$\lim_{x\to0}\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\lim_{x\to0}\frac{g(u(x))-g(u(0))}{x}\Big/\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span> For any sequence <span class="math-container">$x_n\to0$</span> where <span class="math-container">$|x_n|&lt;\delta u'(0)/2$</span>, choose <span class="math-container">$y_n$</span> between <span class="math-container">$x_n/2u'(0)$</span> and <span class="math-container">$2x_n/u'(0)$</span> such that <span class="math-container">$u(y_n)=u(0)+x_n$</span>, then <span class="math-container">$y_n\to0$</span> and <span class="math-container">$$\lim_{n\to\infty}\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\lim_{n\to\infty}\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span> so <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(0)$</span> with <span class="math-container">$g'(u(0))=f'(0)/u'(0)$</span>. <span class="math-container">$\quad\square$</span></p> Answer 2: <p>This is not an answer, but we have several versions of a "chain rule" here and I thought it would be useful to be able to compare them side by side, and this won't fit in a comment.</p> <p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the "converse" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the "converse" (since the other variants also use this notation).</p> <p>So we have <span class="math-container">$f(x) = g(u(x))$</span> for all <span class="math-container">$x$</span> in a suitable domain, and <span class="math-container">$x_0$</span> is in that domain.</p> <hr> <p>For the usual Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>(The text also gives a condition about <span class="math-container">$u(x_0)$</span> being in the interior of the image of <span class="math-container">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p> <hr> <p>For the "converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g'(u(x_0)) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>It's a "converse" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the "converse" still requires <span class="math-container">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p> <hr> <p>For the proposed "other converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is continuous at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <hr> <p>For the theorem in Johnson Chen's answer, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p> <hr>
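<p>As a quick sanity check on the counterexample in Answer 1, note that <span class="math-container">$u$</span> only takes values of the form <span class="math-container">$1/(\pi k)$</span> with <span class="math-container">$k$</span> an integer, so <span class="math-container">$\sin(1/u(x))=\sin(\pi k)=0$</span> and hence <span class="math-container">$f=g\circ u\equiv0$</span>. The following minimal sketch in Python (the names <code>g</code> and <code>u</code> simply mirror the notation above; the language and the sampled points are illustrative choices, and the check holds only up to floating-point roundoff, so it is an illustration rather than a proof) evaluates <span class="math-container">$f$</span> and the difference quotient of <span class="math-container">$u$</span> at a few points:</p>

<pre><code>import math

def g(x):
    # g(x) = x*sin(1/x) for x != 0 and g(0) = 0: continuous everywhere,
    # but not differentiable at 0, since g(x)/x = sin(1/x) oscillates.
    return x * math.sin(1.0 / x) if x != 0 else 0.0

def u(x):
    # u(x) = 1/(pi*floor(1/(pi*x))) for x != 0 and u(0) = 0, so u takes
    # only values 1/(pi*k) with k an integer; hence g(u(x)) = 0 exactly.
    return 1.0 / (math.pi * math.floor(1.0 / (math.pi * x))) if x != 0 else 0.0

for n in range(1, 6):
    x = 10.0 ** (-n)  # sample small positive points approaching 0
    # f(x) = g(u(x)) should vanish up to roundoff, and the difference
    # quotient (u(x) - u(0))/x should approach u'(0) = 1 as x -> 0.
    print(f"x = {x:.0e}:  f(x) = {g(u(x)):+.1e},  (u(x)-u(0))/x = {u(x) / x:.6f}")
</code></pre>

<p>For these inputs <span class="math-container">$f(x)$</span> prints at roundoff scale (of order <span class="math-container">$10^{-16}$</span> or smaller) while the difference quotients of <span class="math-container">$u$</span> approach <span class="math-container">$1$</span>, matching the claims <span class="math-container">$f\equiv0$</span> and <span class="math-container">$u'(0)=1$</span>.</p>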
We have <span class="math-container">$u'(0)=\lim\limits_{x\to0}\,(\pi x\lfloor 1/\pi x\rfloor)^{-1}=1$</span>, <span class="math-container">$g$</span> is continuous everywhere, and <span class="math-container">$g$</span> is not differentiable at <span class="math-container">$u(0)=0$</span>.</p> <p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class="math-container">$\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p> <blockquote> <p>Suppose <span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0\in\mathbb{R}$</span>, <span class="math-container">$u'(x_0)$</span> exists and is nonzero, <span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0)$</span>, and <span class="math-container">$f=g\circ u$</span> is differentiable at <span class="math-container">$x_0$</span>. Then <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0)$</span>, with <span class="math-container">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p> </blockquote> <p><em>Proof</em>. Without loss of generality, suppose <span class="math-container">$x_0=0$</span> and <span class="math-container">$u'(0)&gt;0$</span>. In a neighborhood of <span class="math-container">$0$</span> we have <span class="math-container">$u(x)=u(0)+u'(0)x+\varepsilon(x)x$</span> where <span class="math-container">$\lim_{x\to0}\varepsilon(x)=\varepsilon(0)=0$</span>. Choose <span class="math-container">$\delta&gt;0$</span> such that for <span class="math-container">$0&lt;x&lt;\delta$</span>, <span class="math-container">$|\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class="math-container">$0&lt;x&lt;\delta u'(0)/2&lt;2\delta u'(0)$</span> we have <span class="math-container">$$\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\varepsilon(2x/u'(0))2x/u'(0)\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\&amp;=u(0)+x\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\varepsilon(x/2u'(0))x/2u'(0)\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\&amp;=u(0)+3x/4&lt;u(0)+x.\end{aligned}$$</span> Similarly we may deal with the case <span class="math-container">$x&lt;0$</span> and conclude that if <span class="math-container">$|x|$</span> is sufficiently small, by continuity of <span class="math-container">$u$</span> there exists <span class="math-container">$y$</span> between <span class="math-container">$2x/u'(0)$</span> and <span class="math-container">$x/2u'(0)$</span> such that <span class="math-container">$u(y)=u(0)+x$</span>. 
We have <span class="math-container">$$\lim_{x\to0}\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\lim_{x\to0}\frac{g(u(x))-g(u(0))}{x}\Big/\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span> For any sequence <span class="math-container">$x_n\to0$</span> where <span class="math-container">$|x_n|&lt;\delta u'(0)/2$</span>, choose <span class="math-container">$y_n$</span> between <span class="math-container">$x_n/2u'(0)$</span> and <span class="math-container">$2x_n/u'(0)$</span> such that <span class="math-container">$u(y_n)=u(0)+x_n$</span>, then <span class="math-container">$y_n\to0$</span> and <span class="math-container">$$\lim_{n\to\infty}\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\lim_{n\to\infty}\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span> so <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(0)$</span> with <span class="math-container">$g'(u(0))=f'(0)/u'(0)$</span>. <span class="math-container">$\quad\square$</span></p> Answer 4: <p>This is not an answer, but we have several versions of a "chain rule" here and I thought it would be useful to be able to compare them side by side, and this won't fit in a comment.</p> <p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the "converse" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the "converse" (since the other variants also use this notation).</p> <p>So we have <span class="math-container">$f(x) = g(u(x))$</span> for all <span class="math-container">$x$</span> in a suitable domain, and <span class="math-container">$x_0$</span> is in that domain.</p> <hr> <p>For the usual Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>(The text also gives a condition about <span class="math-container">$u(x_0)$</span> being in the interior of the image of <span class="math-container">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p> <hr> <p>For the "converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g'(u(x_0)) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>It's a "converse" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the "converse" still requires <span class="math-container">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p> <hr> <p>For the proposed "other converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is continuous at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <hr> <p>For the theorem in Johnson Chen's answer, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p> <hr> Answer 5: <p>The answer to the first question has been mentioned by user384138 in the comments.</p> <p><strong>A counterexample</strong> for the second question:</p> <p>Let <span class="math-container">$x_0=0$</span>, <span class="math-container">$g(x)=\begin{cases}x\sin(1/x)&amp;x\ne0\\0&amp;x=0\end{cases}$</span>, and <span class="math-container">$u(x)=\begin{cases}(\pi\lfloor1/\pi x\rfloor)^{-1}&amp;x\ne0\\0&amp;x=0\end{cases}$</span>, then <span class="math-container">$f(x)=(g\circ u)(x)\equiv0$</span>. 
We have <span class="math-container">$u'(0)=\lim\limits_{x\to0}\,(\pi x\lfloor 1/\pi x\rfloor)^{-1}=1$</span>, <span class="math-container">$g$</span> is continuous everywhere, and <span class="math-container">$g$</span> is not differentiable at <span class="math-container">$u(0)=0$</span>.</p> <p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class="math-container">$\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p> <blockquote> <p>Suppose <span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0\in\mathbb{R}$</span>, <span class="math-container">$u'(x_0)$</span> exists and is nonzero, <span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0)$</span>, and <span class="math-container">$f=g\circ u$</span> is differentiable at <span class="math-container">$x_0$</span>. Then <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0)$</span>, with <span class="math-container">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p> </blockquote> <p><em>Proof</em>. Without loss of generality, suppose <span class="math-container">$x_0=0$</span> and <span class="math-container">$u'(0)&gt;0$</span>. In a neighborhood of <span class="math-container">$0$</span> we have <span class="math-container">$u(x)=u(0)+u'(0)x+\varepsilon(x)x$</span> where <span class="math-container">$\lim_{x\to0}\varepsilon(x)=\varepsilon(0)=0$</span>. Choose <span class="math-container">$\delta&gt;0$</span> such that for <span class="math-container">$0&lt;x&lt;\delta$</span>, <span class="math-container">$|\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class="math-container">$0&lt;x&lt;\delta u'(0)/2&lt;2\delta u'(0)$</span> we have <span class="math-container">$$\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\varepsilon(2x/u'(0))2x/u'(0)\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\&amp;=u(0)+x\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\varepsilon(x/2u'(0))x/2u'(0)\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\&amp;=u(0)+3x/4&lt;u(0)+x.\end{aligned}$$</span> Similarly we may deal with the case <span class="math-container">$x&lt;0$</span> and conclude that if <span class="math-container">$|x|$</span> is sufficiently small, by continuity of <span class="math-container">$u$</span> there exists <span class="math-container">$y$</span> between <span class="math-container">$2x/u'(0)$</span> and <span class="math-container">$x/2u'(0)$</span> such that <span class="math-container">$u(y)=u(0)+x$</span>. 
We have <span class="math-container">$$\lim_{x\to0}\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\lim_{x\to0}\frac{g(u(x))-g(u(0))}{x}\Big/\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span> For any sequence <span class="math-container">$x_n\to0$</span> where <span class="math-container">$|x_n|&lt;\delta u'(0)/2$</span>, choose <span class="math-container">$y_n$</span> between <span class="math-container">$x_n/2u'(0)$</span> and <span class="math-container">$2x_n/u'(0)$</span> such that <span class="math-container">$u(y_n)=u(0)+x_n$</span>, then <span class="math-container">$y_n\to0$</span> and <span class="math-container">$$\lim_{n\to\infty}\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\lim_{n\to\infty}\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span> so <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(0)$</span> with <span class="math-container">$g'(u(0))=f'(0)/u'(0)$</span>. <span class="math-container">$\quad\square$</span></p> Answer 6: <p>This is not an answer, but we have several versions of a "chain rule" here and I thought it would be useful to be able to compare them side by side, and this won't fit in a comment.</p> <p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the "converse" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the "converse" (since the other variants also use this notation).</p> <p>So we have <span class="math-container">$f(x) = g(u(x))$</span> for all <span class="math-container">$x$</span> in a suitable domain, and <span class="math-container">$x_0$</span> is in that domain.</p> <hr> <p>For the usual Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>(The text also gives a condition about <span class="math-container">$u(x_0)$</span> being in the interior of the image of <span class="math-container">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p> <hr> <p>For the "converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g'(u(x_0)) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>It's a "converse" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the "converse" still requires <span class="math-container">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p> <hr> <p>For the proposed "other converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is continuous at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <hr> <p>For the theorem in Johnson Chen's answer, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p> <hr> Answer 7: <p>The answer to the first question has been mentioned by user384138 in the comments.</p> <p><strong>A counterexample</strong> for the second question:</p> <p>Let <span class="math-container">$x_0=0$</span>, <span class="math-container">$g(x)=\begin{cases}x\sin(1/x)&amp;x\ne0\\0&amp;x=0\end{cases}$</span>, and <span class="math-container">$u(x)=\begin{cases}(\pi\lfloor1/\pi x\rfloor)^{-1}&amp;x\ne0\\0&amp;x=0\end{cases}$</span>, then <span class="math-container">$f(x)=(g\circ u)(x)\equiv0$</span>. 
We have <span class="math-container">$u'(0)=\lim\limits_{x\to0}\,(\pi x\lfloor 1/\pi x\rfloor)^{-1}=1$</span>, <span class="math-container">$g$</span> is continuous everywhere, and <span class="math-container">$g$</span> is not differentiable at <span class="math-container">$u(0)=0$</span>.</p> <p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class="math-container">$\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p> <blockquote> <p>Suppose <span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0\in\mathbb{R}$</span>, <span class="math-container">$u'(x_0)$</span> exists and is nonzero, <span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0)$</span>, and <span class="math-container">$f=g\circ u$</span> is differentiable at <span class="math-container">$x_0$</span>. Then <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0)$</span>, with <span class="math-container">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p> </blockquote> <p><em>Proof</em>. Without loss of generality, suppose <span class="math-container">$x_0=0$</span> and <span class="math-container">$u'(0)&gt;0$</span>. In a neighborhood of <span class="math-container">$0$</span> we have <span class="math-container">$u(x)=u(0)+u'(0)x+\varepsilon(x)x$</span> where <span class="math-container">$\lim_{x\to0}\varepsilon(x)=\varepsilon(0)=0$</span>. Choose <span class="math-container">$\delta&gt;0$</span> such that for <span class="math-container">$0&lt;x&lt;\delta$</span>, <span class="math-container">$|\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class="math-container">$0&lt;x&lt;\delta u'(0)/2&lt;2\delta u'(0)$</span> we have <span class="math-container">$$\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\varepsilon(2x/u'(0))2x/u'(0)\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\&amp;=u(0)+x\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\varepsilon(x/2u'(0))x/2u'(0)\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\&amp;=u(0)+3x/4&lt;u(0)+x.\end{aligned}$$</span> Similarly we may deal with the case <span class="math-container">$x&lt;0$</span> and conclude that if <span class="math-container">$|x|$</span> is sufficiently small, by continuity of <span class="math-container">$u$</span> there exists <span class="math-container">$y$</span> between <span class="math-container">$2x/u'(0)$</span> and <span class="math-container">$x/2u'(0)$</span> such that <span class="math-container">$u(y)=u(0)+x$</span>. 
We have
$$\lim_{x\to0}\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\lim_{x\to0}\frac{g(u(x))-g(u(0))}{x}\Big/\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$
For any sequence $x_n\to0$ with $|x_n|<\delta u'(0)/2$, choose $y_n$ between $x_n/(2u'(0))$ and $2x_n/u'(0)$ such that $u(y_n)=u(0)+x_n$; then $y_n\to0$ and
$$\lim_{n\to\infty}\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\lim_{n\to\infty}\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$
so $g$ is differentiable at $u(0)$ with $g'(u(0))=f'(0)/u'(0)$. $\quad\square$</p>
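<p>To see the conclusion $g'(u(x_0))=f'(x_0)/u'(x_0)$ in action numerically, here is a minimal sketch with a smooth example of my own choosing (it illustrates the formula, not the full strength of the theorem): take $u(x)=x+x^2$ and $g=\sin$, treat only $f=g\circ u$ and $u$ as known, and recover $g'(u(0))=\cos(0)=1$ from difference quotients.</p>
<pre><code>import math

# Hypothetical smooth test functions (my choice, not from the answer above):
# u(x) = x + x**2 gives u(0) = 0 and u'(0) = 1; g(y) = sin(y).
u = lambda x: x + x * x
f = lambda x: math.sin(u(x))  # only f = g o u and u are treated as "known"

h = 1e-6
f_prime_0 = (f(h) - f(-h)) / (2 * h)  # central difference quotient for f'(0)
u_prime_0 = (u(h) - u(-h)) / (2 * h)  # central difference quotient for u'(0)

# The theorem recovers g'(u(0)) as f'(0)/u'(0); the true value is cos(0) = 1.
print(f_prime_0 / u_prime_0)  # prints a value extremely close to 1.0
</code></pre>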
I think it actually is unnecessary.)</p> <hr> <p>For the "converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g'(u(x_0)) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>It's a "converse" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the "converse" still requires <span class="math-container">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p> <hr> <p>For the proposed "other converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is continuous at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <hr> <p>For the theorem in Johnson Chen's answer, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p> <hr> Answer 23: <p>The answer to the first question has been mentioned by user384138 in the comments.</p> <p><strong>A counterexample</strong> for the second question:</p> <p>Let <span class="math-container">$x_0=0$</span>, <span class="math-container">$g(x)=\begin{cases}x\sin(1/x)&amp;x\ne0\\0&amp;x=0\end{cases}$</span>, and <span class="math-container">$u(x)=\begin{cases}(\pi\lfloor1/\pi x\rfloor)^{-1}&amp;x\ne0\\0&amp;x=0\end{cases}$</span>, then <span class="math-container">$f(x)=(g\circ u)(x)\equiv0$</span>. 
We have <span class="math-container">$u'(0)=\lim\limits_{x\to0}\,(\pi x\lfloor 1/\pi x\rfloor)^{-1}=1$</span>, <span class="math-container">$g$</span> is continuous everywhere, and <span class="math-container">$g$</span> is not differentiable at <span class="math-container">$u(0)=0$</span>.</p> <p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class="math-container">$\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p> <blockquote> <p>Suppose <span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0\in\mathbb{R}$</span>, <span class="math-container">$u'(x_0)$</span> exists and is nonzero, <span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0)$</span>, and <span class="math-container">$f=g\circ u$</span> is differentiable at <span class="math-container">$x_0$</span>. Then <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0)$</span>, with <span class="math-container">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p> </blockquote> <p><em>Proof</em>. Without loss of generality, suppose <span class="math-container">$x_0=0$</span> and <span class="math-container">$u'(0)&gt;0$</span>. In a neighborhood of <span class="math-container">$0$</span> we have <span class="math-container">$u(x)=u(0)+u'(0)x+\varepsilon(x)x$</span> where <span class="math-container">$\lim_{x\to0}\varepsilon(x)=\varepsilon(0)=0$</span>. Choose <span class="math-container">$\delta&gt;0$</span> such that for <span class="math-container">$0&lt;x&lt;\delta$</span>, <span class="math-container">$|\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class="math-container">$0&lt;x&lt;\delta u'(0)/2&lt;2\delta u'(0)$</span> we have <span class="math-container">$$\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\varepsilon(2x/u'(0))2x/u'(0)\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\&amp;=u(0)+x\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\varepsilon(x/2u'(0))x/2u'(0)\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\&amp;=u(0)+3x/4&lt;u(0)+x.\end{aligned}$$</span> Similarly we may deal with the case <span class="math-container">$x&lt;0$</span> and conclude that if <span class="math-container">$|x|$</span> is sufficiently small, by continuity of <span class="math-container">$u$</span> there exists <span class="math-container">$y$</span> between <span class="math-container">$2x/u'(0)$</span> and <span class="math-container">$x/2u'(0)$</span> such that <span class="math-container">$u(y)=u(0)+x$</span>. 
We have <span class="math-container">$$\lim_{x\to0}\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\lim_{x\to0}\frac{g(u(x))-g(u(0))}{x}\Big/\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span> For any sequence <span class="math-container">$x_n\to0$</span> where <span class="math-container">$|x_n|&lt;\delta u'(0)/2$</span>, choose <span class="math-container">$y_n$</span> between <span class="math-container">$x_n/2u'(0)$</span> and <span class="math-container">$2x_n/u'(0)$</span> such that <span class="math-container">$u(y_n)=u(0)+x_n$</span>, then <span class="math-container">$y_n\to0$</span> and <span class="math-container">$$\lim_{n\to\infty}\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\lim_{n\to\infty}\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span> so <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(0)$</span> with <span class="math-container">$g'(u(0))=f'(0)/u'(0)$</span>. <span class="math-container">$\quad\square$</span></p> Answer 24: <p>This is not an answer, but we have several versions of a "chain rule" here and I thought it would be useful to be able to compare them side by side, and this won't fit in a comment.</p> <p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the "converse" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the "converse" (since the other variants also use this notation).</p> <p>So we have <span class="math-container">$f(x) = g(u(x))$</span> for all <span class="math-container">$x$</span> in a suitable domain, and <span class="math-container">$x_0$</span> is in that domain.</p> <hr> <p>For the usual Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>(The text also gives a condition about <span class="math-container">$u(x_0)$</span> being in the interior of the image of <span class="math-container">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p> <hr> <p>For the "converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g'(u(x_0)) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>It's a "converse" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the "converse" still requires <span class="math-container">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p> <hr> <p>For the proposed "other converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is continuous at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <hr> <p>For the theorem in Johnson Chen's answer, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p> <hr> Answer 25: <p>The answer to the first question has been mentioned by user384138 in the comments.</p> <p><strong>A counterexample</strong> for the second question:</p> <p>Let <span class="math-container">$x_0=0$</span>, <span class="math-container">$g(x)=\begin{cases}x\sin(1/x)&amp;x\ne0\\0&amp;x=0\end{cases}$</span>, and <span class="math-container">$u(x)=\begin{cases}(\pi\lfloor1/\pi x\rfloor)^{-1}&amp;x\ne0\\0&amp;x=0\end{cases}$</span>, then <span class="math-container">$f(x)=(g\circ u)(x)\equiv0$</span>. 
We have <span class="math-container">$u'(0)=\lim\limits_{x\to0}\,(\pi x\lfloor 1/\pi x\rfloor)^{-1}=1$</span>, <span class="math-container">$g$</span> is continuous everywhere, and <span class="math-container">$g$</span> is not differentiable at <span class="math-container">$u(0)=0$</span>.</p> <p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class="math-container">$\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p> <blockquote> <p>Suppose <span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0\in\mathbb{R}$</span>, <span class="math-container">$u'(x_0)$</span> exists and is nonzero, <span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0)$</span>, and <span class="math-container">$f=g\circ u$</span> is differentiable at <span class="math-container">$x_0$</span>. Then <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0)$</span>, with <span class="math-container">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p> </blockquote> <p><em>Proof</em>. Without loss of generality, suppose <span class="math-container">$x_0=0$</span> and <span class="math-container">$u'(0)&gt;0$</span>. In a neighborhood of <span class="math-container">$0$</span> we have <span class="math-container">$u(x)=u(0)+u'(0)x+\varepsilon(x)x$</span> where <span class="math-container">$\lim_{x\to0}\varepsilon(x)=\varepsilon(0)=0$</span>. Choose <span class="math-container">$\delta&gt;0$</span> such that for <span class="math-container">$0&lt;x&lt;\delta$</span>, <span class="math-container">$|\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class="math-container">$0&lt;x&lt;\delta u'(0)/2&lt;2\delta u'(0)$</span> we have <span class="math-container">$$\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\varepsilon(2x/u'(0))2x/u'(0)\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\&amp;=u(0)+x\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\varepsilon(x/2u'(0))x/2u'(0)\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\&amp;=u(0)+3x/4&lt;u(0)+x.\end{aligned}$$</span> Similarly we may deal with the case <span class="math-container">$x&lt;0$</span> and conclude that if <span class="math-container">$|x|$</span> is sufficiently small, by continuity of <span class="math-container">$u$</span> there exists <span class="math-container">$y$</span> between <span class="math-container">$2x/u'(0)$</span> and <span class="math-container">$x/2u'(0)$</span> such that <span class="math-container">$u(y)=u(0)+x$</span>. 
We have <span class="math-container">$$\lim_{x\to0}\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\lim_{x\to0}\frac{g(u(x))-g(u(0))}{x}\Big/\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span> For any sequence <span class="math-container">$x_n\to0$</span> where <span class="math-container">$|x_n|&lt;\delta u'(0)/2$</span>, choose <span class="math-container">$y_n$</span> between <span class="math-container">$x_n/2u'(0)$</span> and <span class="math-container">$2x_n/u'(0)$</span> such that <span class="math-container">$u(y_n)=u(0)+x_n$</span>, then <span class="math-container">$y_n\to0$</span> and <span class="math-container">$$\lim_{n\to\infty}\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\lim_{n\to\infty}\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span> so <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(0)$</span> with <span class="math-container">$g'(u(0))=f'(0)/u'(0)$</span>. <span class="math-container">$\quad\square$</span></p> Answer 26: <p>This is not an answer, but we have several versions of a "chain rule" here and I thought it would be useful to be able to compare them side by side, and this won't fit in a comment.</p> <p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the "converse" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the "converse" (since the other variants also use this notation).</p> <p>So we have <span class="math-container">$f(x) = g(u(x))$</span> for all <span class="math-container">$x$</span> in a suitable domain, and <span class="math-container">$x_0$</span> is in that domain.</p> <hr> <p>For the usual Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>(The text also gives a condition about <span class="math-container">$u(x_0)$</span> being in the interior of the image of <span class="math-container">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p> <hr> <p>For the "converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g'(u(x_0)) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>It's a "converse" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the "converse" still requires <span class="math-container">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p> <hr> <p>For the proposed "other converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is continuous at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <hr> <p>For the theorem in Johnson Chen's answer, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p> <hr> Answer 27: <p>The answer to the first question has been mentioned by user384138 in the comments.</p> <p><strong>A counterexample</strong> for the second question:</p> <p>Let <span class="math-container">$x_0=0$</span>, <span class="math-container">$g(x)=\begin{cases}x\sin(1/x)&amp;x\ne0\\0&amp;x=0\end{cases}$</span>, and <span class="math-container">$u(x)=\begin{cases}(\pi\lfloor1/\pi x\rfloor)^{-1}&amp;x\ne0\\0&amp;x=0\end{cases}$</span>, then <span class="math-container">$f(x)=(g\circ u)(x)\equiv0$</span>. 
We have <span class="math-container">$u'(0)=\lim\limits_{x\to0}\,(\pi x\lfloor 1/\pi x\rfloor)^{-1}=1$</span>, <span class="math-container">$g$</span> is continuous everywhere, and <span class="math-container">$g$</span> is not differentiable at <span class="math-container">$u(0)=0$</span>.</p> <p><strong>Now we prove</strong> the following. <del>It can be easily generalised (by taking a path), which is left as an exercise.</del> <strong>Edit:</strong> it also holds in <span class="math-container">$\mathbb{R}^n$</span>, but I currently do not know an elementary proof.</p> <blockquote> <p>Suppose <span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0\in\mathbb{R}$</span>, <span class="math-container">$u'(x_0)$</span> exists and is nonzero, <span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0)$</span>, and <span class="math-container">$f=g\circ u$</span> is differentiable at <span class="math-container">$x_0$</span>. Then <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0)$</span>, with <span class="math-container">$f'(x_0)=g'(u(x_0))u'(x_0).$</span></p> </blockquote> <p><em>Proof</em>. Without loss of generality, suppose <span class="math-container">$x_0=0$</span> and <span class="math-container">$u'(0)&gt;0$</span>. In a neighborhood of <span class="math-container">$0$</span> we have <span class="math-container">$u(x)=u(0)+u'(0)x+\varepsilon(x)x$</span> where <span class="math-container">$\lim_{x\to0}\varepsilon(x)=\varepsilon(0)=0$</span>. Choose <span class="math-container">$\delta&gt;0$</span> such that for <span class="math-container">$0&lt;x&lt;\delta$</span>, <span class="math-container">$|\varepsilon(x)|&lt;u'(0)/2$</span>. For <span class="math-container">$0&lt;x&lt;\delta u'(0)/2&lt;2\delta u'(0)$</span> we have <span class="math-container">$$\begin{aligned}u(2x/u'(0))&amp;=u(0)+2x+\varepsilon(2x/u'(0))2x/u'(0)\\&amp;&gt;u(0)+2x-(u'(0)/2)2x/u'(0)\\&amp;=u(0)+x\\\text{and }u(x/2u'(0))&amp;=u(0)+x/2+\varepsilon(x/2u'(0))x/2u'(0)\\&amp;&lt;u(0)+x/2+(u'(0)/2)x/2u'(0)\\&amp;=u(0)+3x/4&lt;u(0)+x.\end{aligned}$$</span> Similarly we may deal with the case <span class="math-container">$x&lt;0$</span> and conclude that if <span class="math-container">$|x|$</span> is sufficiently small, by continuity of <span class="math-container">$u$</span> there exists <span class="math-container">$y$</span> between <span class="math-container">$2x/u'(0)$</span> and <span class="math-container">$x/2u'(0)$</span> such that <span class="math-container">$u(y)=u(0)+x$</span>. 
We have <span class="math-container">$$\lim_{x\to0}\frac{g(u(x))-g(u(0))}{u(x)-u(0)}=\lim_{x\to0}\frac{g(u(x))-g(u(0))}{x}\Big/\frac{u(x)-u(0)}{x}=f'(0)/u'(0).$$</span> For any sequence <span class="math-container">$x_n\to0$</span> where <span class="math-container">$|x_n|&lt;\delta u'(0)/2$</span>, choose <span class="math-container">$y_n$</span> between <span class="math-container">$x_n/2u'(0)$</span> and <span class="math-container">$2x_n/u'(0)$</span> such that <span class="math-container">$u(y_n)=u(0)+x_n$</span>, then <span class="math-container">$y_n\to0$</span> and <span class="math-container">$$\lim_{n\to\infty}\frac{g(u(0)+x_n)-g(u(0))}{x_n}=\lim_{n\to\infty}\frac{g(u(y_n))-g(u(0))}{u(y_n)-u(0)}=f'(0)/u'(0),$$</span> so <span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(0)$</span> with <span class="math-container">$g'(u(0))=f'(0)/u'(0)$</span>. <span class="math-container">$\quad\square$</span></p> Answer 28: <p>This is not an answer, but we have several versions of a "chain rule" here and I thought it would be useful to be able to compare them side by side, and this won't fit in a comment.</p> <p>In particular, it's somewhat annoying that the linked text gives the ordinary Chain Rule and the "converse" of the Chain Rule in different notation, so I start by converting the ordinary Chain Rule to the notation of the "converse" (since the other variants also use this notation).</p> <p>So we have <span class="math-container">$f(x) = g(u(x))$</span> for all <span class="math-container">$x$</span> in a suitable domain, and <span class="math-container">$x_0$</span> is in that domain.</p> <hr> <p>For the usual Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>(The text also gives a condition about <span class="math-container">$u(x_0)$</span> being in the interior of the image of <span class="math-container">$u$</span> that other sources omit. 
I think it actually is unnecessary.)</p> <hr> <p>For the "converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g'(u(x_0)) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>It's a "converse" in the sense that <strong>one</strong> of the conditions in each theorem is a consequence in the other, but the "converse" still requires <span class="math-container">$u$</span> to be continuous and requires an additional condition that has no role in the original Chain Rule.</p> <hr> <p>For the proposed "other converse" of the Chain Rule, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is continuous at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <hr> <p>For the theorem in Johnson Chen's answer, the conditions are:</p> <ul> <li><span class="math-container">$u$</span> is continuous in a neighborhood of <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$g$</span> is defined in a neighborhood of <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0.$</span></li> <li><span class="math-container">$u'(x_0) \neq 0.$</span></li> </ul> <p>The consequences are:</p> <ul> <li><span class="math-container">$g$</span> is differentiable at <span class="math-container">$u(x_0).$</span></li> <li><span class="math-container">$f'(x_0) = g'(u(x_0)) u'(x_0).$</span></li> </ul> <p>This introduces conditions that must hold within neighborhoods, whereas the other versions put conditions only on individual points.</p> <hr>
Primes satisfy $pq+2=rt$
2,101,532
7
https://math.stackexchange.com/questions/2101532/primes-satisfy-pq2-rt
<p>Are there infinitely many primes $p, q, r, t$ such that $pq+2=rt$?</p> <p>$3\times{11}+2=5\times{7}$<br> $5\times{11}+2=3\times{19}$</p> <p>I guess this may be an open question?</p>
[ { "answer_id": 2101714, "score": 3, "text": "<p>Even the equation </p>\n\n<p>$$3p+2=5q$$ with primes $p$ and $q$ probably has infinite many solutions :</p>\n\n<p>Suppose, $p=5k+1$ and $q=3k+1$ are both prime , then the equation $3p+2=5q$ is satisfied.</p>\n\n<p>If the generalized bunyakovsky conjecture (See <a href=\"https://en.wikipedia.org/wiki/Bunyakovsky_conjecture\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Bunyakovsky_conjecture</a> ) is true there are infinite many such $k$, so the given equation probably has infinite many solutions.</p>\n" }, { "answer_id": 2102028, "score": 2, "text": "<p>The question is whether there are infinitely many integers $n$ such that both $n,n+2$ have exactly two prime factors. Whether this is occurs is an open question.</p>\n\n<p>This is essentially a problem in <a href=\"https://en.wikipedia.org/wiki/Sieve_theory\" rel=\"nofollow noreferrer\">sieve theory</a>. The <a href=\"https://en.wikipedia.org/wiki/Parity_problem_(sieve_theory)\" rel=\"nofollow noreferrer\">parity problem</a> is a fundamental obstruction that prevents us from solving many of these sorts of problems.</p>\n\n<p>Essentially, the parity problem prevents us from distinguishing numbers with an even or odd number of prime factors.</p>\n" }, { "answer_id": 2101714, "score": 3, "text": "<p>Even the equation </p>\n\n<p>$$3p+2=5q$$ with primes $p$ and $q$ probably has infinite many solutions :</p>\n\n<p>Suppose, $p=5k+1$ and $q=3k+1$ are both prime , then the equation $3p+2=5q$ is satisfied.</p>\n\n<p>If the generalized bunyakovsky conjecture (See <a href=\"https://en.wikipedia.org/wiki/Bunyakovsky_conjecture\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Bunyakovsky_conjecture</a> ) is true there are infinite many such $k$, so the given equation probably has infinite many solutions.</p>\n" }, { "answer_id": 2102028, "score": 2, "text": "<p>The question is whether there are infinitely many integers $n$ such that both $n,n+2$ have exactly two prime factors. Whether this is occurs is an open question.</p>\n\n<p>This is essentially a problem in <a href=\"https://en.wikipedia.org/wiki/Sieve_theory\" rel=\"nofollow noreferrer\">sieve theory</a>. The <a href=\"https://en.wikipedia.org/wiki/Parity_problem_(sieve_theory)\" rel=\"nofollow noreferrer\">parity problem</a> is a fundamental obstruction that prevents us from solving many of these sorts of problems.</p>\n\n<p>Essentially, the parity problem prevents us from distinguishing numbers with an even or odd number of prime factors.</p>\n" }, { "answer_id": 2101714, "score": 3, "text": "<p>Even the equation </p>\n\n<p>$$3p+2=5q$$ with primes $p$ and $q$ probably has infinite many solutions :</p>\n\n<p>Suppose, $p=5k+1$ and $q=3k+1$ are both prime , then the equation $3p+2=5q$ is satisfied.</p>\n\n<p>If the generalized bunyakovsky conjecture (See <a href=\"https://en.wikipedia.org/wiki/Bunyakovsky_conjecture\" rel=\"nofollow noreferrer\">https://en.wikipedia.org/wiki/Bunyakovsky_conjecture</a> ) is true there are infinite many such $k$, so the given equation probably has infinite many solutions.</p>\n" }, { "answer_id": 2102028, "score": 2, "text": "<p>The question is whether there are infinitely many integers $n$ such that both $n,n+2$ have exactly two prime factors. Whether this is occurs is an open question.</p>\n\n<p>This is essentially a problem in <a href=\"https://en.wikipedia.org/wiki/Sieve_theory\" rel=\"nofollow noreferrer\">sieve theory</a>. 
Question: Primes satisfy $pq+2=rt$ Answer 1: <p>Even the equation $$3p+2=5q$$ with primes $p$ and $q$ probably has infinitely many solutions:</p> <p>Suppose $p=5k+1$ and $q=3k+1$ are both prime; then the equation $3p+2=5q$ is satisfied.</p> <p>If the generalized Bunyakovsky conjecture (see <a href="https://en.wikipedia.org/wiki/Bunyakovsky_conjecture" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Bunyakovsky_conjecture</a>) is true, there are infinitely many such $k$, so the given equation probably has infinitely many solutions.</p> Answer 2: <p>The question is whether there are infinitely many integers $n$ such that both $n$ and $n+2$ have exactly two prime factors. Whether this occurs is an open question.</p> <p>This is essentially a problem in <a href="https://en.wikipedia.org/wiki/Sieve_theory" rel="nofollow noreferrer">sieve theory</a>. The <a href="https://en.wikipedia.org/wiki/Parity_problem_(sieve_theory)" rel="nofollow noreferrer">parity problem</a> is a fundamental obstruction that prevents us from solving many of these sorts of problems.</p> <p>Essentially, the parity problem prevents us from distinguishing numbers with an even or odd number of prime factors.</p>
What is the meaning of a differential in terms of an exact differential?
2,101,896
7
https://math.stackexchange.com/questions/2101896/what-is-the-meaning-of-a-differential-in-terms-of-an-exact-differential
<p>As I understand it, a differential is an outdated concept from the time of Leibniz, used to define derivatives and integrals before limits came along. As such, $dy$ or $dx$ don't really have any meaning on their own. I have seen in multiple places that thinking of a derivative as a ratio of two infinitesimal changes, while intuitive, is wrong. I understand this, and besides, I am not even sure there is a rigorous way of saying when a quantity is infinitesimal.</p> <p>On the other hand, I have read that you can define these differentials as actual quantities that approximate the change of a function. For example, for a function of one real variable the differential is the function $df$ of two independent real variables $x$ and $\Delta x$ given by:</p> <p>$df(x,\Delta x)=f'(x)\,\Delta x$</p> <p>How this then reduces to</p> <p>$df = f'(x)\,dx$</p> <p>and again what $dx$ means, I don't understand. It seems to me that it is simply a linear approximation to the function at a point $x$. However, there is no mention of how large or small $dx$ must be; it seems just as ill-defined as before, and I have still found other places referring to it as an infinitesimal even after it has been redefined in this way.</p> <p>Ignoring this, I can see how this could then be extended to functions of more than one independent variable:</p> <p>$y = f(x_1,\ldots,x_n)$</p> <p>$dy = \frac{\partial f}{\partial x_1}\,dx_1 + \cdots + \frac{\partial f}{\partial x_n}\,dx_n$</p> <p>However, then the notions of exact and inexact differentials are brought up. This seems unrelated, but it raises the question of what a differential means in this case.</p> <p>All this comes from a course I am taking in Thermal Physics. <a href="https://i.stack.imgur.com/S7nbf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/S7nbf.png" alt="These are the two slides"></a> <a href="https://i.stack.imgur.com/tP4OL.png" rel="noreferrer">second slide</a></p> <p>If anyone can enlighten me as to what the concept of differentials means, or perhaps direct me towards a book or website where I can study it myself, I would be very grateful.</p> <p>An explanation of Schwarz' Theorem in this context would be great too.</p>
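<p>To make my confusion concrete (a toy example of my own, with the function and numbers chosen arbitrarily): take $f(x) = x^2$, so $df(x,\Delta x) = 2x\,\Delta x$. At $x = 3$ with $\Delta x = 0.1$, the differential gives $df = 0.6$, while the actual change is $f(3.1) - f(3) = 9.61 - 9 = 0.61$. The differential is defined for <em>any</em> $\Delta x$, but the discrepancy $0.01 = (\Delta x)^2$ suggests it is only a good approximation when $\Delta x$ is small.</p>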
[ { "answer_id": 2102018, "score": 3, "text": "<ul>\n<li><strong>Motivation</strong></li>\n</ul>\n\n<p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\\boxed{dy = f'(x) \\ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p>\n\n<ul>\n<li><strong>Geometric Intuition</strong></li>\n</ul>\n\n<p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \\ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p>\n\n<p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\\displaystyle{\\frac{dy}{dx} = f'(x)}$ for $dx \\neq 0$. </p>\n\n<ul>\n<li><strong>Brief Historical Context</strong></li>\n</ul>\n\n<p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p>\n\n<ul>\n<li><strong>More In-Depth Discussion</strong></li>\n</ul>\n\n<p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \\ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p>\n\n<p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\\alpha$ so that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = 0$$</p>\n\n<p>This can be reduced to say that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\left|\\frac{f(x + \\Delta x) - f(x)}{\\Delta x} - \\alpha \\right| = 0$$</p>\n\n<p>Which is to say</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{f(x + \\Delta x) - f(x)}{\\Delta x} = \\alpha$$</p>\n\n<p>Which just says that $f'(x)$ exists and is finite with value $\\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. We can do some re-writing:</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = \\lim_{dx \\to 0} \\frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p>\n\n<p>because $\\alpha \\ dx = f'(x) \\ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. 
What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$, in the sense that the difference $f(x + dx) - f(x) - df(x, dx)$ is very small compared to $|dx|$ as $|dx| \\to 0$. </p>\n" }, { "answer_id": 2101955, "score": 0, "text": "<p>You can think of differentials (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx$-$dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\\; dx$. (This is the equation of the tangent line at that point, but given in $dx$-$dy$ coordinates.) By leaving $dx$ and $dy$ as variables, it's possible to let $dx$ take on any value you like, e.g., $\\Delta x$. Then the linear approximation theorem says that if $dx = \\Delta x$, then $\\Delta y \\approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p>\n" }, { "answer_id": 2110034, "score": 0, "text": "<p>One-dimensional calculus is very special. For example:</p>\n\n<ul>\n<li>If $f$ is any continuous function, then $f(x) \\, \\mathrm{d}x$ has an antiderivative</li>\n<li>If $y$ and $u$ depend differentiably on $x$, then $\\mathrm{d}y$ and $\\mathrm{d}u$ are proportional to each other. This means it makes sense to speak of the ratio $\\frac{\\mathrm{d}y}{\\mathrm{d}u}$ (at least wherever $\\mathrm{d}u$ is 'nonzero')</li>\n<li>A definite integral $\\int_a^b f(x) \\, \\mathrm{d}x$ only requires us to specify the endpoints</li>\n</ul>\n\n<p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p>\n\n<p>Exactness is the property of being antidifferentiable:</p>\n\n<blockquote>\n <p>A differential $\\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\\omega = \\mathrm{d}f$</p>\n</blockquote>\n\n<p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\\gamma$ is a path from $P$ to $Q$, then</p>\n\n<p>$$ \\int_\\gamma \\mathrm{d}f = f(Q) - f(P) $$</p>\n\n<p>So hopefully you can see why exactness is very important.</p>\n\n<hr>\n\n<p>We can define a product on differentials called the <em>wedge product</em>; it mostly behaves as you'd expect a product to behave, except that it has the unusual feature of being antisymmetric on ordinary differentials: $\\mathrm{d}x \\wedge \\mathrm{d}y = -\\mathrm{d}y \\wedge \\mathrm{d}x$.
(and also $\\mathrm{d}x \\wedge \\mathrm{d}x = 0$).</p>\n\n<p>We can use this to differentiate a differential form to get a 2-form:</p>\n\n<p>$$ \\mathrm{d}\\left( u \\, \\mathrm{d}v \\right) = \\mathrm{d}u \\wedge \\mathrm{d} v $$\nand so on to even higher degrees.</p>\n\n<p>For example, in $\\mathbb{R}^2$, we could compute\n$$\\begin{align} \\mathrm{d}\\left(\nf(x,y) \\, \\mathrm{d}x + g(x,y) \\, \\mathrm{d} y \\right)\n&amp;= \\mathrm{d}f(x,y) \\wedge \\mathrm{d}x + \\mathrm{d}g(x,y) \\wedge \\mathrm{d} y\n\\\\&amp;= (f_1(x,y) \\mathrm{d}x + f_2(x,y) \\mathrm{d}y) \\wedge \\mathrm{d}x\n+ (g_1(x,y) \\mathrm{d}x + g_2(x,y) \\mathrm{d}y) \\wedge \\mathrm{d}y\n\\\\&amp;= (g_1(x,y) - f_2(x,y)) \\mathrm{d}x \\wedge \\mathrm{d}y\n\\end{align}$$</p>\n\n<p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant.)</p>\n\n<p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking whether a differential $\\omega$ satisfies $\\mathrm{d} \\omega = 0$. These forms are also special:</p>\n\n<blockquote>\n <p>A differential form $\\omega$ is <em>closed</em> if and only if $\\mathrm{d} \\omega = 0$</p>\n</blockquote>\n\n<p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\\mathbb{R}^n$, every closed differential form is exact.</p>\n\n<p>Since testing whether a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\\mathbb{R}^n$ it's the only step you need!</p>\n" }
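<p>A small worked check of the closedness test just described (both forms are chosen arbitrarily for illustration). For $\omega = 2xy \, \mathrm{d}x + x^2 \, \mathrm{d}y$, the computation above gives $\mathrm{d}\omega = \left(\partial_x(x^2) - \partial_y(2xy)\right)\mathrm{d}x \wedge \mathrm{d}y = (2x - 2x)\,\mathrm{d}x \wedge \mathrm{d}y = 0$, so $\omega$ is closed; and indeed it is exact, since $\omega = \mathrm{d}(x^2 y)$. By contrast, for $\omega = y \, \mathrm{d}x - x \, \mathrm{d}y$ we get $\mathrm{d}\omega = (-1 - 1)\,\mathrm{d}x \wedge \mathrm{d}y \neq 0$, so this $\omega$ is not closed and therefore cannot be exact. This is exactly where Schwarz' theorem enters: if $\omega = \mathrm{d}F$, then $f = F_x$ and $g = F_y$, and equality of the mixed partials $F_{xy} = F_{yx}$ forces $g_1 - f_2 = 0$ in the computation above.</p>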
For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p>\n\n<p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\\alpha$ so that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = 0$$</p>\n\n<p>This can be reduced to say that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\left|\\frac{f(x + \\Delta x) - f(x)}{\\Delta x} - \\alpha \\right| = 0$$</p>\n\n<p>Which is to say</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{f(x + \\Delta x) - f(x)}{\\Delta x} = \\alpha$$</p>\n\n<p>Which just says that $f'(x)$ exists and is finite with value $\\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. We can do some re-writing:</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = \\lim_{dx \\to 0} \\frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p>\n\n<p>because $\\alpha \\ dx = f'(x) \\ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x)-df(x,dx)$ is very small compared to $|dx|$ as $|dx| \\to 0$. </p>\n" }, { "answer_id": 2101955, "score": 0, "text": "<p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\\; dx$. (This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) By leaving $dx$ and $dy$ as variable, it's possible to let $dx$ take on any value you like, e.g., $\\Delta x$. Then the linear approximation theorem says that if $dx = \\Delta x$, then $\\Delta y \\approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p>\n" }, { "answer_id": 2110034, "score": 0, "text": "<p>One dimensional calculus is very special. For example</p>\n\n<ul>\n<li>If $f$ is any continuous function, then $f(x) \\, \\mathrm{d}x$ has an antiderivative</li>\n<li>If $y$ and $u$ depend differentiably on $x$, then $\\mathrm{d}y$ and $\\mathrm{d}u$ are proportional to each other. 
This means it makes sense to speak of the ratio $\\frac{\\mathrm{d}y}{\\mathrm{d}u}$ (at least wherever $\\mathrm{d}u$ is 'nonzero')</li>\n<li>To talk about definite integrals $\\int_a^b f(x) \\, \\mathrm{d}x$ requires that we only need to specify the endpoints</li>\n</ul>\n\n<p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p>\n\n<p>Exactness is the property of being antidifferentiable:</p>\n\n<blockquote>\n <p>A differential $\\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\\omega = \\mathrm{d}f$</p>\n</blockquote>\n\n<p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\\gamma$ is a path from $P$ to $Q$, then</p>\n\n<p>$$ \\int_\\gamma \\mathrm{d}f = f(Q) - f(P) $$</p>\n\n<p>So hopefully you can see exactness being very important.</p>\n\n<hr>\n\n<p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\\mathrm{d}x \\wedge \\mathrm{d}y = -\\mathrm{d}y \\wedge \\mathrm{d}x$. (and also $\\mathrm{d}x \\wedge \\mathrm{d}x = 0$).</p>\n\n<p>W can use this to differentiate a differential form: to get a 2-form:</p>\n\n<p>$$ \\mathrm{d}\\left( u \\, \\mathrm{d}v \\right) = \\mathrm{d}u \\wedge \\mathrm{d} v $$\nand furthermore to even higher degrees.</p>\n\n<p>For example, in $\\mathbb{R}^2$, we could compute\n$$\\begin{align} \\mathrm{d}\\left(\nf(x,y) \\, \\mathrm{d}x + g(x,y) \\, \\mathrm{d} y \\right)\n&amp;= \\mathrm{d}f(x,y) \\wedge \\mathrm{d}x + \\mathrm{d}g(x,y) \\wedge \\mathrm{d} y\n\\\\&amp;= (f_1(x,y) \\mathrm{d}x + f_2(x,y) \\mathrm{d}y) \\wedge \\mathrm{d}x\n+ (g_1(x,y) \\mathrm{d}x + g_2(x,y) \\mathrm{d}y) \\wedge \\mathrm{d}y\n\\\\&amp;= (g_1(x,y) - f_2(x,y)) \\mathrm{d}x \\wedge \\mathrm{d}y\n\\end{align}$$</p>\n\n<p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p>\n\n<p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\\omega$ satisfies $\\mathrm{d} \\omega = 0$. These are also special:</p>\n\n<blockquote>\n <p>A differential form $\\omega$ is <em>closed</em> if and only if $\\mathrm{d} \\omega = 0$</p>\n</blockquote>\n\n<p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\\mathbb{R}^n$, every closed differential form is exact.</p>\n\n<p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\\mathbb{R}^n$ it's the only step you need!</p>\n" }, { "answer_id": 2102018, "score": 3, "text": "<ul>\n<li><strong>Motivation</strong></li>\n</ul>\n\n<p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\\boxed{dy = f'(x) \\ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p>\n\n<ul>\n<li><strong>Geometric Intuition</strong></li>\n</ul>\n\n<p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. 
Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \\ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p>\n\n<p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\\displaystyle{\\frac{dy}{dx} = f'(x)}$ for $dx \\neq 0$. </p>\n\n<ul>\n<li><strong>Brief Historical Context</strong></li>\n</ul>\n\n<p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p>\n\n<ul>\n<li><strong>More In-Depth Discussion</strong></li>\n</ul>\n\n<p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \\ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p>\n\n<p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\\alpha$ so that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = 0$$</p>\n\n<p>This can be reduced to say that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\left|\\frac{f(x + \\Delta x) - f(x)}{\\Delta x} - \\alpha \\right| = 0$$</p>\n\n<p>Which is to say</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{f(x + \\Delta x) - f(x)}{\\Delta x} = \\alpha$$</p>\n\n<p>Which just says that $f'(x)$ exists and is finite with value $\\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. We can do some re-writing:</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = \\lim_{dx \\to 0} \\frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p>\n\n<p>because $\\alpha \\ dx = f'(x) \\ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x)-df(x,dx)$ is very small compared to $|dx|$ as $|dx| \\to 0$. </p>\n" }, { "answer_id": 2101955, "score": 0, "text": "<p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\\; dx$. (This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) 
By leaving $dx$ and $dy$ as variable, it's possible to let $dx$ take on any value you like, e.g., $\\Delta x$. Then the linear approximation theorem says that if $dx = \\Delta x$, then $\\Delta y \\approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p>\n" }, { "answer_id": 2110034, "score": 0, "text": "<p>One dimensional calculus is very special. For example</p>\n\n<ul>\n<li>If $f$ is any continuous function, then $f(x) \\, \\mathrm{d}x$ has an antiderivative</li>\n<li>If $y$ and $u$ depend differentiably on $x$, then $\\mathrm{d}y$ and $\\mathrm{d}u$ are proportional to each other. This means it makes sense to speak of the ratio $\\frac{\\mathrm{d}y}{\\mathrm{d}u}$ (at least wherever $\\mathrm{d}u$ is 'nonzero')</li>\n<li>To talk about definite integrals $\\int_a^b f(x) \\, \\mathrm{d}x$ requires that we only need to specify the endpoints</li>\n</ul>\n\n<p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p>\n\n<p>Exactness is the property of being antidifferentiable:</p>\n\n<blockquote>\n <p>A differential $\\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\\omega = \\mathrm{d}f$</p>\n</blockquote>\n\n<p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\\gamma$ is a path from $P$ to $Q$, then</p>\n\n<p>$$ \\int_\\gamma \\mathrm{d}f = f(Q) - f(P) $$</p>\n\n<p>So hopefully you can see exactness being very important.</p>\n\n<hr>\n\n<p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\\mathrm{d}x \\wedge \\mathrm{d}y = -\\mathrm{d}y \\wedge \\mathrm{d}x$. (and also $\\mathrm{d}x \\wedge \\mathrm{d}x = 0$).</p>\n\n<p>W can use this to differentiate a differential form: to get a 2-form:</p>\n\n<p>$$ \\mathrm{d}\\left( u \\, \\mathrm{d}v \\right) = \\mathrm{d}u \\wedge \\mathrm{d} v $$\nand furthermore to even higher degrees.</p>\n\n<p>For example, in $\\mathbb{R}^2$, we could compute\n$$\\begin{align} \\mathrm{d}\\left(\nf(x,y) \\, \\mathrm{d}x + g(x,y) \\, \\mathrm{d} y \\right)\n&amp;= \\mathrm{d}f(x,y) \\wedge \\mathrm{d}x + \\mathrm{d}g(x,y) \\wedge \\mathrm{d} y\n\\\\&amp;= (f_1(x,y) \\mathrm{d}x + f_2(x,y) \\mathrm{d}y) \\wedge \\mathrm{d}x\n+ (g_1(x,y) \\mathrm{d}x + g_2(x,y) \\mathrm{d}y) \\wedge \\mathrm{d}y\n\\\\&amp;= (g_1(x,y) - f_2(x,y)) \\mathrm{d}x \\wedge \\mathrm{d}y\n\\end{align}$$</p>\n\n<p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p>\n\n<p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\\omega$ satisfies $\\mathrm{d} \\omega = 0$. 
These are also special:</p>\n\n<blockquote>\n <p>A differential form $\\omega$ is <em>closed</em> if and only if $\\mathrm{d} \\omega = 0$</p>\n</blockquote>\n\n<p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\\mathbb{R}^n$, every closed differential form is exact.</p>\n\n<p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\\mathbb{R}^n$ it's the only step you need!</p>\n" }, { "answer_id": 2102018, "score": 3, "text": "<ul>\n<li><strong>Motivation</strong></li>\n</ul>\n\n<p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\\boxed{dy = f'(x) \\ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p>\n\n<ul>\n<li><strong>Geometric Intuition</strong></li>\n</ul>\n\n<p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \\ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p>\n\n<p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\\displaystyle{\\frac{dy}{dx} = f'(x)}$ for $dx \\neq 0$. </p>\n\n<ul>\n<li><strong>Brief Historical Context</strong></li>\n</ul>\n\n<p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p>\n\n<ul>\n<li><strong>More In-Depth Discussion</strong></li>\n</ul>\n\n<p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \\ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p>\n\n<p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\\alpha$ so that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = 0$$</p>\n\n<p>This can be reduced to say that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\left|\\frac{f(x + \\Delta x) - f(x)}{\\Delta x} - \\alpha \\right| = 0$$</p>\n\n<p>Which is to say</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{f(x + \\Delta x) - f(x)}{\\Delta x} = \\alpha$$</p>\n\n<p>Which just says that $f'(x)$ exists and is finite with value $\\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. 
We can do some re-writing:</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = \\lim_{dx \\to 0} \\frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p>\n\n<p>because $\\alpha \\ dx = f'(x) \\ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x)-df(x,dx)$ is very small compared to $|dx|$ as $|dx| \\to 0$. </p>\n" }, { "answer_id": 2101955, "score": 0, "text": "<p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\\; dx$. (This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) By leaving $dx$ and $dy$ as variable, it's possible to let $dx$ take on any value you like, e.g., $\\Delta x$. Then the linear approximation theorem says that if $dx = \\Delta x$, then $\\Delta y \\approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p>\n" }, { "answer_id": 2110034, "score": 0, "text": "<p>One dimensional calculus is very special. For example</p>\n\n<ul>\n<li>If $f$ is any continuous function, then $f(x) \\, \\mathrm{d}x$ has an antiderivative</li>\n<li>If $y$ and $u$ depend differentiably on $x$, then $\\mathrm{d}y$ and $\\mathrm{d}u$ are proportional to each other. This means it makes sense to speak of the ratio $\\frac{\\mathrm{d}y}{\\mathrm{d}u}$ (at least wherever $\\mathrm{d}u$ is 'nonzero')</li>\n<li>To talk about definite integrals $\\int_a^b f(x) \\, \\mathrm{d}x$ requires that we only need to specify the endpoints</li>\n</ul>\n\n<p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p>\n\n<p>Exactness is the property of being antidifferentiable:</p>\n\n<blockquote>\n <p>A differential $\\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\\omega = \\mathrm{d}f$</p>\n</blockquote>\n\n<p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\\gamma$ is a path from $P$ to $Q$, then</p>\n\n<p>$$ \\int_\\gamma \\mathrm{d}f = f(Q) - f(P) $$</p>\n\n<p>So hopefully you can see exactness being very important.</p>\n\n<hr>\n\n<p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\\mathrm{d}x \\wedge \\mathrm{d}y = -\\mathrm{d}y \\wedge \\mathrm{d}x$. 
(and also $\\mathrm{d}x \\wedge \\mathrm{d}x = 0$).</p>\n\n<p>W can use this to differentiate a differential form: to get a 2-form:</p>\n\n<p>$$ \\mathrm{d}\\left( u \\, \\mathrm{d}v \\right) = \\mathrm{d}u \\wedge \\mathrm{d} v $$\nand furthermore to even higher degrees.</p>\n\n<p>For example, in $\\mathbb{R}^2$, we could compute\n$$\\begin{align} \\mathrm{d}\\left(\nf(x,y) \\, \\mathrm{d}x + g(x,y) \\, \\mathrm{d} y \\right)\n&amp;= \\mathrm{d}f(x,y) \\wedge \\mathrm{d}x + \\mathrm{d}g(x,y) \\wedge \\mathrm{d} y\n\\\\&amp;= (f_1(x,y) \\mathrm{d}x + f_2(x,y) \\mathrm{d}y) \\wedge \\mathrm{d}x\n+ (g_1(x,y) \\mathrm{d}x + g_2(x,y) \\mathrm{d}y) \\wedge \\mathrm{d}y\n\\\\&amp;= (g_1(x,y) - f_2(x,y)) \\mathrm{d}x \\wedge \\mathrm{d}y\n\\end{align}$$</p>\n\n<p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p>\n\n<p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\\omega$ satisfies $\\mathrm{d} \\omega = 0$. These are also special:</p>\n\n<blockquote>\n <p>A differential form $\\omega$ is <em>closed</em> if and only if $\\mathrm{d} \\omega = 0$</p>\n</blockquote>\n\n<p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\\mathbb{R}^n$, every closed differential form is exact.</p>\n\n<p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\\mathbb{R}^n$ it's the only step you need!</p>\n" }, { "answer_id": 2102018, "score": 3, "text": "<ul>\n<li><strong>Motivation</strong></li>\n</ul>\n\n<p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\\boxed{dy = f'(x) \\ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p>\n\n<ul>\n<li><strong>Geometric Intuition</strong></li>\n</ul>\n\n<p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \\ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p>\n\n<p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\\displaystyle{\\frac{dy}{dx} = f'(x)}$ for $dx \\neq 0$. </p>\n\n<ul>\n<li><strong>Brief Historical Context</strong></li>\n</ul>\n\n<p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p>\n\n<ul>\n<li><strong>More In-Depth Discussion</strong></li>\n</ul>\n\n<p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \\ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. 
For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p>\n\n<p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\\alpha$ so that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = 0$$</p>\n\n<p>This can be reduced to say that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\left|\\frac{f(x + \\Delta x) - f(x)}{\\Delta x} - \\alpha \\right| = 0$$</p>\n\n<p>Which is to say</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{f(x + \\Delta x) - f(x)}{\\Delta x} = \\alpha$$</p>\n\n<p>Which just says that $f'(x)$ exists and is finite with value $\\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. We can do some re-writing:</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = \\lim_{dx \\to 0} \\frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p>\n\n<p>because $\\alpha \\ dx = f'(x) \\ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x)-df(x,dx)$ is very small compared to $|dx|$ as $|dx| \\to 0$. </p>\n" }, { "answer_id": 2101955, "score": 0, "text": "<p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\\; dx$. (This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) By leaving $dx$ and $dy$ as variable, it's possible to let $dx$ take on any value you like, e.g., $\\Delta x$. Then the linear approximation theorem says that if $dx = \\Delta x$, then $\\Delta y \\approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p>\n" }, { "answer_id": 2110034, "score": 0, "text": "<p>One dimensional calculus is very special. For example</p>\n\n<ul>\n<li>If $f$ is any continuous function, then $f(x) \\, \\mathrm{d}x$ has an antiderivative</li>\n<li>If $y$ and $u$ depend differentiably on $x$, then $\\mathrm{d}y$ and $\\mathrm{d}u$ are proportional to each other. 
This means it makes sense to speak of the ratio $\\frac{\\mathrm{d}y}{\\mathrm{d}u}$ (at least wherever $\\mathrm{d}u$ is 'nonzero')</li>\n<li>To talk about definite integrals $\\int_a^b f(x) \\, \\mathrm{d}x$ requires that we only need to specify the endpoints</li>\n</ul>\n\n<p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p>\n\n<p>Exactness is the property of being antidifferentiable:</p>\n\n<blockquote>\n <p>A differential $\\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\\omega = \\mathrm{d}f$</p>\n</blockquote>\n\n<p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\\gamma$ is a path from $P$ to $Q$, then</p>\n\n<p>$$ \\int_\\gamma \\mathrm{d}f = f(Q) - f(P) $$</p>\n\n<p>So hopefully you can see exactness being very important.</p>\n\n<hr>\n\n<p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\\mathrm{d}x \\wedge \\mathrm{d}y = -\\mathrm{d}y \\wedge \\mathrm{d}x$. (and also $\\mathrm{d}x \\wedge \\mathrm{d}x = 0$).</p>\n\n<p>W can use this to differentiate a differential form: to get a 2-form:</p>\n\n<p>$$ \\mathrm{d}\\left( u \\, \\mathrm{d}v \\right) = \\mathrm{d}u \\wedge \\mathrm{d} v $$\nand furthermore to even higher degrees.</p>\n\n<p>For example, in $\\mathbb{R}^2$, we could compute\n$$\\begin{align} \\mathrm{d}\\left(\nf(x,y) \\, \\mathrm{d}x + g(x,y) \\, \\mathrm{d} y \\right)\n&amp;= \\mathrm{d}f(x,y) \\wedge \\mathrm{d}x + \\mathrm{d}g(x,y) \\wedge \\mathrm{d} y\n\\\\&amp;= (f_1(x,y) \\mathrm{d}x + f_2(x,y) \\mathrm{d}y) \\wedge \\mathrm{d}x\n+ (g_1(x,y) \\mathrm{d}x + g_2(x,y) \\mathrm{d}y) \\wedge \\mathrm{d}y\n\\\\&amp;= (g_1(x,y) - f_2(x,y)) \\mathrm{d}x \\wedge \\mathrm{d}y\n\\end{align}$$</p>\n\n<p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p>\n\n<p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\\omega$ satisfies $\\mathrm{d} \\omega = 0$. These are also special:</p>\n\n<blockquote>\n <p>A differential form $\\omega$ is <em>closed</em> if and only if $\\mathrm{d} \\omega = 0$</p>\n</blockquote>\n\n<p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\\mathbb{R}^n$, every closed differential form is exact.</p>\n\n<p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\\mathbb{R}^n$ it's the only step you need!</p>\n" }, { "answer_id": 2102018, "score": 3, "text": "<ul>\n<li><strong>Motivation</strong></li>\n</ul>\n\n<p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\\boxed{dy = f'(x) \\ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p>\n\n<ul>\n<li><strong>Geometric Intuition</strong></li>\n</ul>\n\n<p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. 
Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \\ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p>\n\n<p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\\displaystyle{\\frac{dy}{dx} = f'(x)}$ for $dx \\neq 0$. </p>\n\n<ul>\n<li><strong>Brief Historical Context</strong></li>\n</ul>\n\n<p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p>\n\n<ul>\n<li><strong>More In-Depth Discussion</strong></li>\n</ul>\n\n<p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \\ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p>\n\n<p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\\alpha$ so that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = 0$$</p>\n\n<p>This can be reduced to say that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\left|\\frac{f(x + \\Delta x) - f(x)}{\\Delta x} - \\alpha \\right| = 0$$</p>\n\n<p>Which is to say</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{f(x + \\Delta x) - f(x)}{\\Delta x} = \\alpha$$</p>\n\n<p>Which just says that $f'(x)$ exists and is finite with value $\\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. We can do some re-writing:</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = \\lim_{dx \\to 0} \\frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p>\n\n<p>because $\\alpha \\ dx = f'(x) \\ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x)-df(x,dx)$ is very small compared to $|dx|$ as $|dx| \\to 0$. </p>\n" }, { "answer_id": 2101955, "score": 0, "text": "<p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\\; dx$. (This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) 
By leaving $dx$ and $dy$ as variable, it's possible to let $dx$ take on any value you like, e.g., $\\Delta x$. Then the linear approximation theorem says that if $dx = \\Delta x$, then $\\Delta y \\approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p>\n" }, { "answer_id": 2110034, "score": 0, "text": "<p>One dimensional calculus is very special. For example</p>\n\n<ul>\n<li>If $f$ is any continuous function, then $f(x) \\, \\mathrm{d}x$ has an antiderivative</li>\n<li>If $y$ and $u$ depend differentiably on $x$, then $\\mathrm{d}y$ and $\\mathrm{d}u$ are proportional to each other. This means it makes sense to speak of the ratio $\\frac{\\mathrm{d}y}{\\mathrm{d}u}$ (at least wherever $\\mathrm{d}u$ is 'nonzero')</li>\n<li>To talk about definite integrals $\\int_a^b f(x) \\, \\mathrm{d}x$ requires that we only need to specify the endpoints</li>\n</ul>\n\n<p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p>\n\n<p>Exactness is the property of being antidifferentiable:</p>\n\n<blockquote>\n <p>A differential $\\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\\omega = \\mathrm{d}f$</p>\n</blockquote>\n\n<p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\\gamma$ is a path from $P$ to $Q$, then</p>\n\n<p>$$ \\int_\\gamma \\mathrm{d}f = f(Q) - f(P) $$</p>\n\n<p>So hopefully you can see exactness being very important.</p>\n\n<hr>\n\n<p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\\mathrm{d}x \\wedge \\mathrm{d}y = -\\mathrm{d}y \\wedge \\mathrm{d}x$. (and also $\\mathrm{d}x \\wedge \\mathrm{d}x = 0$).</p>\n\n<p>W can use this to differentiate a differential form: to get a 2-form:</p>\n\n<p>$$ \\mathrm{d}\\left( u \\, \\mathrm{d}v \\right) = \\mathrm{d}u \\wedge \\mathrm{d} v $$\nand furthermore to even higher degrees.</p>\n\n<p>For example, in $\\mathbb{R}^2$, we could compute\n$$\\begin{align} \\mathrm{d}\\left(\nf(x,y) \\, \\mathrm{d}x + g(x,y) \\, \\mathrm{d} y \\right)\n&amp;= \\mathrm{d}f(x,y) \\wedge \\mathrm{d}x + \\mathrm{d}g(x,y) \\wedge \\mathrm{d} y\n\\\\&amp;= (f_1(x,y) \\mathrm{d}x + f_2(x,y) \\mathrm{d}y) \\wedge \\mathrm{d}x\n+ (g_1(x,y) \\mathrm{d}x + g_2(x,y) \\mathrm{d}y) \\wedge \\mathrm{d}y\n\\\\&amp;= (g_1(x,y) - f_2(x,y)) \\mathrm{d}x \\wedge \\mathrm{d}y\n\\end{align}$$</p>\n\n<p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p>\n\n<p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\\omega$ satisfies $\\mathrm{d} \\omega = 0$. 
These are also special:</p>\n\n<blockquote>\n <p>A differential form $\\omega$ is <em>closed</em> if and only if $\\mathrm{d} \\omega = 0$</p>\n</blockquote>\n\n<p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\\mathbb{R}^n$, every closed differential form is exact.</p>\n\n<p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\\mathbb{R}^n$ it's the only step you need!</p>\n" }, { "answer_id": 2102018, "score": 3, "text": "<ul>\n<li><strong>Motivation</strong></li>\n</ul>\n\n<p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\\boxed{dy = f'(x) \\ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p>\n\n<ul>\n<li><strong>Geometric Intuition</strong></li>\n</ul>\n\n<p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \\ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p>\n\n<p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\\displaystyle{\\frac{dy}{dx} = f'(x)}$ for $dx \\neq 0$. </p>\n\n<ul>\n<li><strong>Brief Historical Context</strong></li>\n</ul>\n\n<p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p>\n\n<ul>\n<li><strong>More In-Depth Discussion</strong></li>\n</ul>\n\n<p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \\ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p>\n\n<p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\\alpha$ so that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = 0$$</p>\n\n<p>This can be reduced to say that </p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\left|\\frac{f(x + \\Delta x) - f(x)}{\\Delta x} - \\alpha \\right| = 0$$</p>\n\n<p>Which is to say</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{f(x + \\Delta x) - f(x)}{\\Delta x} = \\alpha$$</p>\n\n<p>Which just says that $f'(x)$ exists and is finite with value $\\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. 
We can do some re-writing:</p>\n\n<p>$$\\lim_{\\Delta x \\to 0} \\frac{|f(x + \\Delta x) - f(x) - \\alpha \\Delta x|}{|\\Delta x|} = \\lim_{dx \\to 0} \\frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p>\n\n<p>because $\\alpha \\ dx = f'(x) \\ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x) - df(x, dx)$ is very small compared to $|dx|$ as $|dx| \\to 0$. </p>\n" } ]
Question: What is the meaning of a differential in terms of an exact differential? Answer 1: <ul> <li><strong>Motivation</strong></li> </ul> <p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\boxed{dy = f'(x) \ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p> <ul> <li><strong>Geometric Intuition</strong></li> </ul> <p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on the curve $y = f(x)$. Consequently, the $dy$ and $dx$ axes act as scales for measuring the new variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p> <p>Looking at it this way simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\displaystyle{\frac{dy}{dx} = f'(x)}$ for $dx \neq 0$. </p> <ul> <li><strong>Brief Historical Context</strong></li> </ul> <p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p> <ul> <li><strong>More In-Depth Discussion</strong></li> </ul> <p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p> <p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\alpha$ so that </p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = 0$$</p> <p>This reduces to saying that </p> <p>$$\lim_{\Delta x \to 0} \left|\frac{f(x + \Delta x) - f(x)}{\Delta x} - \alpha \right| = 0$$</p> <p>Which is to say</p> <p>$$\lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} = \alpha$$</p> <p>Which just says that $f'(x)$ exists and is finite with value $\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. We can do some re-writing:</p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = \lim_{dx \to 0} \frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p> <p>because $\alpha \ dx = f'(x) \ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid.
What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x) - df(x, dx)$ is very small compared to $|dx|$ as $|dx| \to 0$ (a small numeric check of this limit relation is sketched after Answer 3 below). </p> Answer 2: <p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\; dx$. (This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) By leaving $dx$ and $dy$ as variables, it's possible to let $dx$ take on any value you like, e.g., $\Delta x$. Then the linear approximation theorem says that if $dx = \Delta x$, then $\Delta y \approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p> Answer 3: <p>One dimensional calculus is very special. For example</p> <ul> <li>If $f$ is any continuous function, then $f(x) \, \mathrm{d}x$ has an antiderivative</li> <li>If $y$ and $u$ depend differentiably on $x$, then $\mathrm{d}y$ and $\mathrm{d}u$ are proportional to each other. This means it makes sense to speak of the ratio $\frac{\mathrm{d}y}{\mathrm{d}u}$ (at least wherever $\mathrm{d}u$ is 'nonzero')</li> <li>To talk about definite integrals $\int_a^b f(x) \, \mathrm{d}x$, we only need to specify the endpoints</li> </ul> <p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p> <p>Exactness is the property of being antidifferentiable:</p> <blockquote> <p>A differential $\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\omega = \mathrm{d}f$</p> </blockquote> <p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\gamma$ is a path from $P$ to $Q$, then</p> <p>$$ \int_\gamma \mathrm{d}f = f(Q) - f(P) $$</p> <p>So hopefully you can see that exactness is very important.</p> <hr> <p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\mathrm{d}x \wedge \mathrm{d}y = -\mathrm{d}y \wedge \mathrm{d}x$ (and also $\mathrm{d}x \wedge \mathrm{d}x = 0$).</p> <p>We can use this to differentiate a differential form to get a 2-form:</p> <p>$$ \mathrm{d}\left( u \, \mathrm{d}v \right) = \mathrm{d}u \wedge \mathrm{d} v $$ and furthermore to even higher degrees.</p> <p>For example, in $\mathbb{R}^2$, we could compute $$\begin{align} \mathrm{d}\left( f(x,y) \, \mathrm{d}x + g(x,y) \, \mathrm{d} y \right) &amp;= \mathrm{d}f(x,y) \wedge \mathrm{d}x + \mathrm{d}g(x,y) \wedge \mathrm{d} y \\&amp;= (f_1(x,y) \mathrm{d}x + f_2(x,y) \mathrm{d}y) \wedge \mathrm{d}x + (g_1(x,y) \mathrm{d}x + g_2(x,y) \mathrm{d}y) \wedge \mathrm{d}y \\&amp;= (g_1(x,y) - f_2(x,y)) \mathrm{d}x \wedge \mathrm{d}y \end{align}$$</p> <p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant.)</p> <p>You might notice that the test using Schwarz's rule that your image is alluding to boils down to checking whether a differential $\omega$ satisfies $\mathrm{d} \omega = 0$.
Answer 3: <p>One-dimensional calculus is very special. For example:</p> <ul> <li>If $f$ is any continuous function, then $f(x) \, \mathrm{d}x$ has an antiderivative</li> <li>If $y$ and $u$ depend differentiably on $x$, then $\mathrm{d}y$ and $\mathrm{d}u$ are proportional to each other. This means it makes sense to speak of the ratio $\frac{\mathrm{d}y}{\mathrm{d}u}$ (at least wherever $\mathrm{d}u$ is 'nonzero')</li> <li>To specify a definite integral $\int_a^b f(x) \, \mathrm{d}x$, we only need to give the two endpoints</li> </ul> <p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving of a name.</p> <p>Exactness is the property of being antidifferentiable:</p> <blockquote> <p>A differential $\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\omega = \mathrm{d}f$</p> </blockquote> <p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\gamma$ is a path from $P$ to $Q$, then</p> <p>$$ \int_\gamma \mathrm{d}f = f(Q) - f(P) $$</p> <p>In particular, the integral of an exact form depends only on the endpoints $P$ and $Q$, not on the path taken between them -- so hopefully you can see why exactness is very important.</p>
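<p>Here is a minimal numerical sketch of that path-independence -- the potential $f(x, y) = xy + \sin x$ and the two paths are arbitrary choices added here: integrating $\mathrm{d}f = (y + \cos x) \, \mathrm{d}x + x \, \mathrm{d}y$ along two different paths from $P$ to $Q$ returns the same value, namely $f(Q) - f(P)$:</p> <pre><code>import math

# Arbitrary potential: f(x, y) = x*y + sin(x), so df = (y + cos(x)) dx + x dy.
def f(x, y):
    return x * y + math.sin(x)

def M(x, y):  # coefficient of dx in df
    return y + math.cos(x)

def N(x, y):  # coefficient of dy in df
    return x

def line_integral(path, n=20000):
    """Approximate the integral of M dx + N dy along path(t), t in [0, 1]."""
    total, h = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * h            # midpoint of the i-th subinterval
        x0, y0 = path(t - 0.5 * h)
        x1, y1 = path(t + 0.5 * h)
        xm, ym = path(t)
        total += M(xm, ym) * (x1 - x0) + N(xm, ym) * (y1 - y0)
    return total

P, Q = (0.0, 0.0), (1.0, 2.0)

def straight(t):
    return (t * Q[0], t * Q[1])      # straight segment from P to Q

def curved(t):
    return (t, 2 * t ** 3)           # another path from P to Q

print(line_integral(straight))       # ~2.841471
print(line_integral(curved))         # ~2.841471
print(f(*Q) - f(*P))                 # 2.8414709848...
</code></pre> <p>For a 1-form that is <em>not</em> exact, the two path integrals would generally disagree; that failure is precisely what the computation below measures.</p>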
<hr> <p>We can define a product on differentials called the <em>wedge product</em>; it behaves much as you'd expect a product to behave, except that it has the unusual feature of being antisymmetric on ordinary differentials: $\mathrm{d}x \wedge \mathrm{d}y = -\mathrm{d}y \wedge \mathrm{d}x$ (and consequently $\mathrm{d}x \wedge \mathrm{d}x = 0$).</p> <p>We can use this to differentiate a differential form and get a 2-form:</p> <p>$$ \mathrm{d}\left( u \, \mathrm{d}v \right) = \mathrm{d}u \wedge \mathrm{d} v $$ and this extends to even higher degrees.</p> <p>For example, in $\mathbb{R}^2$, we could compute $$\begin{align} \mathrm{d}\left( f(x,y) \, \mathrm{d}x + g(x,y) \, \mathrm{d} y \right) &amp;= \mathrm{d}f(x,y) \wedge \mathrm{d}x + \mathrm{d}g(x,y) \wedge \mathrm{d} y \\&amp;= (f_1(x,y) \mathrm{d}x + f_2(x,y) \mathrm{d}y) \wedge \mathrm{d}x + (g_1(x,y) \mathrm{d}x + g_2(x,y) \mathrm{d}y) \wedge \mathrm{d}y \\&amp;= (g_1(x,y) - f_2(x,y)) \mathrm{d}x \wedge \mathrm{d}y \end{align}$$</p> <p>(I use $h_n$ to refer to the partial derivative of a function $h$ with respect to its $n$-th argument while holding the other arguments constant.)</p> <p>You might notice that the test using Schwarz's rule that your image is alluding to boils down to checking whether a differential $\omega$ satisfies $\mathrm{d} \omega = 0$. These forms are also special:</p> <blockquote> <p>A differential form $\omega$ is <em>closed</em> if and only if $\mathrm{d} \omega = 0$</p> </blockquote> <p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\mathbb{R}^n$, every closed differential form is exact.</p> <p>Since testing whether a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact; and in very nice spaces like $\mathbb{R}^n$, it's the only step you need!</p>
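<p>That check is easy to script. Here is a minimal sketch using sympy -- the two 1-forms tested are arbitrary examples added here: it computes $g_1 - f_2$ (written <code>g_x - f_y</code> below), the coefficient of $\mathrm{d}x \wedge \mathrm{d}y$ from the computation above, and reports whether the form is closed:</p> <pre><code>import sympy as sp

x, y = sp.symbols('x y')

def is_closed(f, g):
    """Check whether f dx + g dy is closed, i.e. whether g_x - f_y vanishes identically."""
    return sp.simplify(sp.diff(g, x) - sp.diff(f, y)) == 0

# 2xy dx + x**2 dy is d(x**2 * y), so it is exact and hence closed.
print(is_closed(2*x*y, x**2))   # True
# -y dx + x dy is not closed: g_x - f_y = 1 - (-1) = 2.
print(is_closed(-y, x))         # False
</code></pre> <p>Since these forms live on all of $\mathbb{R}^2$, the <code>True</code> verdict already guarantees a potential function exists, by the closed-implies-exact fact just mentioned.</p>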
(and also $\mathrm{d}x \wedge \mathrm{d}x = 0$).</p> <p>W can use this to differentiate a differential form: to get a 2-form:</p> <p>$$ \mathrm{d}\left( u \, \mathrm{d}v \right) = \mathrm{d}u \wedge \mathrm{d} v $$ and furthermore to even higher degrees.</p> <p>For example, in $\mathbb{R}^2$, we could compute $$\begin{align} \mathrm{d}\left( f(x,y) \, \mathrm{d}x + g(x,y) \, \mathrm{d} y \right) &amp;= \mathrm{d}f(x,y) \wedge \mathrm{d}x + \mathrm{d}g(x,y) \wedge \mathrm{d} y \\&amp;= (f_1(x,y) \mathrm{d}x + f_2(x,y) \mathrm{d}y) \wedge \mathrm{d}x + (g_1(x,y) \mathrm{d}x + g_2(x,y) \mathrm{d}y) \wedge \mathrm{d}y \\&amp;= (g_1(x,y) - f_2(x,y)) \mathrm{d}x \wedge \mathrm{d}y \end{align}$$</p> <p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p> <p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\omega$ satisfies $\mathrm{d} \omega = 0$. These are also special:</p> <blockquote> <p>A differential form $\omega$ is <em>closed</em> if and only if $\mathrm{d} \omega = 0$</p> </blockquote> <p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\mathbb{R}^n$, every closed differential form is exact.</p> <p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\mathbb{R}^n$ it's the only step you need!</p> Answer 7: <ul> <li><strong>Motivation</strong></li> </ul> <p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\boxed{dy = f'(x) \ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p> <ul> <li><strong>Geometric Intuition</strong></li> </ul> <p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p> <p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\displaystyle{\frac{dy}{dx} = f'(x)}$ for $dx \neq 0$. </p> <ul> <li><strong>Brief Historical Context</strong></li> </ul> <p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p> <ul> <li><strong>More In-Depth Discussion</strong></li> </ul> <p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). 
</p> <p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\alpha$ so that </p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = 0$$</p> <p>This can be reduced to say that </p> <p>$$\lim_{\Delta x \to 0} \left|\frac{f(x + \Delta x) - f(x)}{\Delta x} - \alpha \right| = 0$$</p> <p>Which is to say</p> <p>$$\lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} = \alpha$$</p> <p>Which just says that $f'(x)$ exists and is finite with value $\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. We can do some re-writing:</p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = \lim_{dx \to 0} \frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p> <p>because $\alpha \ dx = f'(x) \ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x)-df(x,dx)$ is very small compared to $|dx|$ as $|dx| \to 0$. </p> Answer 8: <p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\; dx$. (This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) By leaving $dx$ and $dy$ as variable, it's possible to let $dx$ take on any value you like, e.g., $\Delta x$. Then the linear approximation theorem says that if $dx = \Delta x$, then $\Delta y \approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p> Answer 9: <p>One dimensional calculus is very special. For example</p> <ul> <li>If $f$ is any continuous function, then $f(x) \, \mathrm{d}x$ has an antiderivative</li> <li>If $y$ and $u$ depend differentiably on $x$, then $\mathrm{d}y$ and $\mathrm{d}u$ are proportional to each other. 
This means it makes sense to speak of the ratio $\frac{\mathrm{d}y}{\mathrm{d}u}$ (at least wherever $\mathrm{d}u$ is 'nonzero')</li> <li>To talk about definite integrals $\int_a^b f(x) \, \mathrm{d}x$ requires that we only need to specify the endpoints</li> </ul> <p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p> <p>Exactness is the property of being antidifferentiable:</p> <blockquote> <p>A differential $\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\omega = \mathrm{d}f$</p> </blockquote> <p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\gamma$ is a path from $P$ to $Q$, then</p> <p>$$ \int_\gamma \mathrm{d}f = f(Q) - f(P) $$</p> <p>So hopefully you can see exactness being very important.</p> <hr> <p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\mathrm{d}x \wedge \mathrm{d}y = -\mathrm{d}y \wedge \mathrm{d}x$. (and also $\mathrm{d}x \wedge \mathrm{d}x = 0$).</p> <p>W can use this to differentiate a differential form: to get a 2-form:</p> <p>$$ \mathrm{d}\left( u \, \mathrm{d}v \right) = \mathrm{d}u \wedge \mathrm{d} v $$ and furthermore to even higher degrees.</p> <p>For example, in $\mathbb{R}^2$, we could compute $$\begin{align} \mathrm{d}\left( f(x,y) \, \mathrm{d}x + g(x,y) \, \mathrm{d} y \right) &amp;= \mathrm{d}f(x,y) \wedge \mathrm{d}x + \mathrm{d}g(x,y) \wedge \mathrm{d} y \\&amp;= (f_1(x,y) \mathrm{d}x + f_2(x,y) \mathrm{d}y) \wedge \mathrm{d}x + (g_1(x,y) \mathrm{d}x + g_2(x,y) \mathrm{d}y) \wedge \mathrm{d}y \\&amp;= (g_1(x,y) - f_2(x,y)) \mathrm{d}x \wedge \mathrm{d}y \end{align}$$</p> <p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p> <p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\omega$ satisfies $\mathrm{d} \omega = 0$. These are also special:</p> <blockquote> <p>A differential form $\omega$ is <em>closed</em> if and only if $\mathrm{d} \omega = 0$</p> </blockquote> <p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\mathbb{R}^n$, every closed differential form is exact.</p> <p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\mathbb{R}^n$ it's the only step you need!</p> Answer 10: <ul> <li><strong>Motivation</strong></li> </ul> <p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\boxed{dy = f'(x) \ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p> <ul> <li><strong>Geometric Intuition</strong></li> </ul> <p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. 
The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p> <p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\displaystyle{\frac{dy}{dx} = f'(x)}$ for $dx \neq 0$. </p> <ul> <li><strong>Brief Historical Context</strong></li> </ul> <p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p> <ul> <li><strong>More In-Depth Discussion</strong></li> </ul> <p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p> <p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\alpha$ so that </p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = 0$$</p> <p>This can be reduced to say that </p> <p>$$\lim_{\Delta x \to 0} \left|\frac{f(x + \Delta x) - f(x)}{\Delta x} - \alpha \right| = 0$$</p> <p>Which is to say</p> <p>$$\lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} = \alpha$$</p> <p>Which just says that $f'(x)$ exists and is finite with value $\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. We can do some re-writing:</p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = \lim_{dx \to 0} \frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p> <p>because $\alpha \ dx = f'(x) \ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x)-df(x,dx)$ is very small compared to $|dx|$ as $|dx| \to 0$. </p> Answer 11: <p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\; dx$. (This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) By leaving $dx$ and $dy$ as variable, it's possible to let $dx$ take on any value you like, e.g., $\Delta x$. Then the linear approximation theorem says that if $dx = \Delta x$, then $\Delta y \approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p> Answer 12: <p>One dimensional calculus is very special. 
For example</p> <ul> <li>If $f$ is any continuous function, then $f(x) \, \mathrm{d}x$ has an antiderivative</li> <li>If $y$ and $u$ depend differentiably on $x$, then $\mathrm{d}y$ and $\mathrm{d}u$ are proportional to each other. This means it makes sense to speak of the ratio $\frac{\mathrm{d}y}{\mathrm{d}u}$ (at least wherever $\mathrm{d}u$ is 'nonzero')</li> <li>To talk about definite integrals $\int_a^b f(x) \, \mathrm{d}x$ requires that we only need to specify the endpoints</li> </ul> <p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p> <p>Exactness is the property of being antidifferentiable:</p> <blockquote> <p>A differential $\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\omega = \mathrm{d}f$</p> </blockquote> <p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\gamma$ is a path from $P$ to $Q$, then</p> <p>$$ \int_\gamma \mathrm{d}f = f(Q) - f(P) $$</p> <p>So hopefully you can see exactness being very important.</p> <hr> <p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\mathrm{d}x \wedge \mathrm{d}y = -\mathrm{d}y \wedge \mathrm{d}x$. (and also $\mathrm{d}x \wedge \mathrm{d}x = 0$).</p> <p>W can use this to differentiate a differential form: to get a 2-form:</p> <p>$$ \mathrm{d}\left( u \, \mathrm{d}v \right) = \mathrm{d}u \wedge \mathrm{d} v $$ and furthermore to even higher degrees.</p> <p>For example, in $\mathbb{R}^2$, we could compute $$\begin{align} \mathrm{d}\left( f(x,y) \, \mathrm{d}x + g(x,y) \, \mathrm{d} y \right) &amp;= \mathrm{d}f(x,y) \wedge \mathrm{d}x + \mathrm{d}g(x,y) \wedge \mathrm{d} y \\&amp;= (f_1(x,y) \mathrm{d}x + f_2(x,y) \mathrm{d}y) \wedge \mathrm{d}x + (g_1(x,y) \mathrm{d}x + g_2(x,y) \mathrm{d}y) \wedge \mathrm{d}y \\&amp;= (g_1(x,y) - f_2(x,y)) \mathrm{d}x \wedge \mathrm{d}y \end{align}$$</p> <p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p> <p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\omega$ satisfies $\mathrm{d} \omega = 0$. These are also special:</p> <blockquote> <p>A differential form $\omega$ is <em>closed</em> if and only if $\mathrm{d} \omega = 0$</p> </blockquote> <p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\mathbb{R}^n$, every closed differential form is exact.</p> <p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\mathbb{R}^n$ it's the only step you need!</p> Answer 13: <ul> <li><strong>Motivation</strong></li> </ul> <p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\boxed{dy = f'(x) \ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p> <ul> <li><strong>Geometric Intuition</strong></li> </ul> <p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. 
Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p> <p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\displaystyle{\frac{dy}{dx} = f'(x)}$ for $dx \neq 0$. </p> <ul> <li><strong>Brief Historical Context</strong></li> </ul> <p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p> <ul> <li><strong>More In-Depth Discussion</strong></li> </ul> <p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p> <p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\alpha$ so that </p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = 0$$</p> <p>This can be reduced to say that </p> <p>$$\lim_{\Delta x \to 0} \left|\frac{f(x + \Delta x) - f(x)}{\Delta x} - \alpha \right| = 0$$</p> <p>Which is to say</p> <p>$$\lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} = \alpha$$</p> <p>Which just says that $f'(x)$ exists and is finite with value $\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. We can do some re-writing:</p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = \lim_{dx \to 0} \frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p> <p>because $\alpha \ dx = f'(x) \ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x)-df(x,dx)$ is very small compared to $|dx|$ as $|dx| \to 0$. </p> Answer 14: <p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\; dx$. (This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) By leaving $dx$ and $dy$ as variable, it's possible to let $dx$ take on any value you like, e.g., $\Delta x$. 
Then the linear approximation theorem says that if $dx = \Delta x$, then $\Delta y \approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p> Answer 15: <p>One dimensional calculus is very special. For example</p> <ul> <li>If $f$ is any continuous function, then $f(x) \, \mathrm{d}x$ has an antiderivative</li> <li>If $y$ and $u$ depend differentiably on $x$, then $\mathrm{d}y$ and $\mathrm{d}u$ are proportional to each other. This means it makes sense to speak of the ratio $\frac{\mathrm{d}y}{\mathrm{d}u}$ (at least wherever $\mathrm{d}u$ is 'nonzero')</li> <li>To talk about definite integrals $\int_a^b f(x) \, \mathrm{d}x$ requires that we only need to specify the endpoints</li> </ul> <p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p> <p>Exactness is the property of being antidifferentiable:</p> <blockquote> <p>A differential $\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\omega = \mathrm{d}f$</p> </blockquote> <p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\gamma$ is a path from $P$ to $Q$, then</p> <p>$$ \int_\gamma \mathrm{d}f = f(Q) - f(P) $$</p> <p>So hopefully you can see exactness being very important.</p> <hr> <p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\mathrm{d}x \wedge \mathrm{d}y = -\mathrm{d}y \wedge \mathrm{d}x$. (and also $\mathrm{d}x \wedge \mathrm{d}x = 0$).</p> <p>W can use this to differentiate a differential form: to get a 2-form:</p> <p>$$ \mathrm{d}\left( u \, \mathrm{d}v \right) = \mathrm{d}u \wedge \mathrm{d} v $$ and furthermore to even higher degrees.</p> <p>For example, in $\mathbb{R}^2$, we could compute $$\begin{align} \mathrm{d}\left( f(x,y) \, \mathrm{d}x + g(x,y) \, \mathrm{d} y \right) &amp;= \mathrm{d}f(x,y) \wedge \mathrm{d}x + \mathrm{d}g(x,y) \wedge \mathrm{d} y \\&amp;= (f_1(x,y) \mathrm{d}x + f_2(x,y) \mathrm{d}y) \wedge \mathrm{d}x + (g_1(x,y) \mathrm{d}x + g_2(x,y) \mathrm{d}y) \wedge \mathrm{d}y \\&amp;= (g_1(x,y) - f_2(x,y)) \mathrm{d}x \wedge \mathrm{d}y \end{align}$$</p> <p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p> <p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\omega$ satisfies $\mathrm{d} \omega = 0$. These are also special:</p> <blockquote> <p>A differential form $\omega$ is <em>closed</em> if and only if $\mathrm{d} \omega = 0$</p> </blockquote> <p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\mathbb{R}^n$, every closed differential form is exact.</p> <p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\mathbb{R}^n$ it's the only step you need!</p> Answer 16: <ul> <li><strong>Motivation</strong></li> </ul> <p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\boxed{dy = f'(x) \ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. 
</p> <ul> <li><strong>Geometric Intuition</strong></li> </ul> <p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p> <p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\displaystyle{\frac{dy}{dx} = f'(x)}$ for $dx \neq 0$. </p> <ul> <li><strong>Brief Historical Context</strong></li> </ul> <p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p> <ul> <li><strong>More In-Depth Discussion</strong></li> </ul> <p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p> <p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\alpha$ so that </p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = 0$$</p> <p>This can be reduced to say that </p> <p>$$\lim_{\Delta x \to 0} \left|\frac{f(x + \Delta x) - f(x)}{\Delta x} - \alpha \right| = 0$$</p> <p>Which is to say</p> <p>$$\lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} = \alpha$$</p> <p>Which just says that $f'(x)$ exists and is finite with value $\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. We can do some re-writing:</p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = \lim_{dx \to 0} \frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p> <p>because $\alpha \ dx = f'(x) \ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x)-df(x,dx)$ is very small compared to $|dx|$ as $|dx| \to 0$. </p> Answer 17: <p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\; dx$. 
(This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) By leaving $dx$ and $dy$ as variable, it's possible to let $dx$ take on any value you like, e.g., $\Delta x$. Then the linear approximation theorem says that if $dx = \Delta x$, then $\Delta y \approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p> Answer 18: <p>One dimensional calculus is very special. For example</p> <ul> <li>If $f$ is any continuous function, then $f(x) \, \mathrm{d}x$ has an antiderivative</li> <li>If $y$ and $u$ depend differentiably on $x$, then $\mathrm{d}y$ and $\mathrm{d}u$ are proportional to each other. This means it makes sense to speak of the ratio $\frac{\mathrm{d}y}{\mathrm{d}u}$ (at least wherever $\mathrm{d}u$ is 'nonzero')</li> <li>To talk about definite integrals $\int_a^b f(x) \, \mathrm{d}x$ requires that we only need to specify the endpoints</li> </ul> <p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p> <p>Exactness is the property of being antidifferentiable:</p> <blockquote> <p>A differential $\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\omega = \mathrm{d}f$</p> </blockquote> <p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\gamma$ is a path from $P$ to $Q$, then</p> <p>$$ \int_\gamma \mathrm{d}f = f(Q) - f(P) $$</p> <p>So hopefully you can see exactness being very important.</p> <hr> <p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\mathrm{d}x \wedge \mathrm{d}y = -\mathrm{d}y \wedge \mathrm{d}x$. (and also $\mathrm{d}x \wedge \mathrm{d}x = 0$).</p> <p>W can use this to differentiate a differential form: to get a 2-form:</p> <p>$$ \mathrm{d}\left( u \, \mathrm{d}v \right) = \mathrm{d}u \wedge \mathrm{d} v $$ and furthermore to even higher degrees.</p> <p>For example, in $\mathbb{R}^2$, we could compute $$\begin{align} \mathrm{d}\left( f(x,y) \, \mathrm{d}x + g(x,y) \, \mathrm{d} y \right) &amp;= \mathrm{d}f(x,y) \wedge \mathrm{d}x + \mathrm{d}g(x,y) \wedge \mathrm{d} y \\&amp;= (f_1(x,y) \mathrm{d}x + f_2(x,y) \mathrm{d}y) \wedge \mathrm{d}x + (g_1(x,y) \mathrm{d}x + g_2(x,y) \mathrm{d}y) \wedge \mathrm{d}y \\&amp;= (g_1(x,y) - f_2(x,y)) \mathrm{d}x \wedge \mathrm{d}y \end{align}$$</p> <p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p> <p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\omega$ satisfies $\mathrm{d} \omega = 0$. 
These are also special:</p> <blockquote> <p>A differential form $\omega$ is <em>closed</em> if and only if $\mathrm{d} \omega = 0$</p> </blockquote> <p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\mathbb{R}^n$, every closed differential form is exact.</p> <p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\mathbb{R}^n$ it's the only step you need!</p> Answer 19: <ul> <li><strong>Motivation</strong></li> </ul> <p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\boxed{dy = f'(x) \ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p> <ul> <li><strong>Geometric Intuition</strong></li> </ul> <p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p> <p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\displaystyle{\frac{dy}{dx} = f'(x)}$ for $dx \neq 0$. </p> <ul> <li><strong>Brief Historical Context</strong></li> </ul> <p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p> <ul> <li><strong>More In-Depth Discussion</strong></li> </ul> <p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p> <p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\alpha$ so that </p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = 0$$</p> <p>This can be reduced to say that </p> <p>$$\lim_{\Delta x \to 0} \left|\frac{f(x + \Delta x) - f(x)}{\Delta x} - \alpha \right| = 0$$</p> <p>Which is to say</p> <p>$$\lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} = \alpha$$</p> <p>Which just says that $f'(x)$ exists and is finite with value $\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. 
We can do some re-writing:</p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = \lim_{dx \to 0} \frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p> <p>because $\alpha \ dx = f'(x) \ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x)-df(x,dx)$ is very small compared to $|dx|$ as $|dx| \to 0$. </p> Answer 20: <p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\; dx$. (This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) By leaving $dx$ and $dy$ as variable, it's possible to let $dx$ take on any value you like, e.g., $\Delta x$. Then the linear approximation theorem says that if $dx = \Delta x$, then $\Delta y \approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p> Answer 21: <p>One dimensional calculus is very special. For example</p> <ul> <li>If $f$ is any continuous function, then $f(x) \, \mathrm{d}x$ has an antiderivative</li> <li>If $y$ and $u$ depend differentiably on $x$, then $\mathrm{d}y$ and $\mathrm{d}u$ are proportional to each other. This means it makes sense to speak of the ratio $\frac{\mathrm{d}y}{\mathrm{d}u}$ (at least wherever $\mathrm{d}u$ is 'nonzero')</li> <li>To talk about definite integrals $\int_a^b f(x) \, \mathrm{d}x$ requires that we only need to specify the endpoints</li> </ul> <p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p> <p>Exactness is the property of being antidifferentiable:</p> <blockquote> <p>A differential $\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\omega = \mathrm{d}f$</p> </blockquote> <p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\gamma$ is a path from $P$ to $Q$, then</p> <p>$$ \int_\gamma \mathrm{d}f = f(Q) - f(P) $$</p> <p>So hopefully you can see exactness being very important.</p> <hr> <p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\mathrm{d}x \wedge \mathrm{d}y = -\mathrm{d}y \wedge \mathrm{d}x$. 
(and also $\mathrm{d}x \wedge \mathrm{d}x = 0$).</p> <p>W can use this to differentiate a differential form: to get a 2-form:</p> <p>$$ \mathrm{d}\left( u \, \mathrm{d}v \right) = \mathrm{d}u \wedge \mathrm{d} v $$ and furthermore to even higher degrees.</p> <p>For example, in $\mathbb{R}^2$, we could compute $$\begin{align} \mathrm{d}\left( f(x,y) \, \mathrm{d}x + g(x,y) \, \mathrm{d} y \right) &amp;= \mathrm{d}f(x,y) \wedge \mathrm{d}x + \mathrm{d}g(x,y) \wedge \mathrm{d} y \\&amp;= (f_1(x,y) \mathrm{d}x + f_2(x,y) \mathrm{d}y) \wedge \mathrm{d}x + (g_1(x,y) \mathrm{d}x + g_2(x,y) \mathrm{d}y) \wedge \mathrm{d}y \\&amp;= (g_1(x,y) - f_2(x,y)) \mathrm{d}x \wedge \mathrm{d}y \end{align}$$</p> <p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p> <p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\omega$ satisfies $\mathrm{d} \omega = 0$. These are also special:</p> <blockquote> <p>A differential form $\omega$ is <em>closed</em> if and only if $\mathrm{d} \omega = 0$</p> </blockquote> <p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\mathbb{R}^n$, every closed differential form is exact.</p> <p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\mathbb{R}^n$ it's the only step you need!</p> Answer 22: <ul> <li><strong>Motivation</strong></li> </ul> <p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\boxed{dy = f'(x) \ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p> <ul> <li><strong>Geometric Intuition</strong></li> </ul> <p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p> <p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\displaystyle{\frac{dy}{dx} = f'(x)}$ for $dx \neq 0$. </p> <ul> <li><strong>Brief Historical Context</strong></li> </ul> <p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p> <ul> <li><strong>More In-Depth Discussion</strong></li> </ul> <p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). 
</p> <p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\alpha$ so that </p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = 0$$</p> <p>This can be reduced to say that </p> <p>$$\lim_{\Delta x \to 0} \left|\frac{f(x + \Delta x) - f(x)}{\Delta x} - \alpha \right| = 0$$</p> <p>Which is to say</p> <p>$$\lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} = \alpha$$</p> <p>Which just says that $f'(x)$ exists and is finite with value $\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. We can do some re-writing:</p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = \lim_{dx \to 0} \frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p> <p>because $\alpha \ dx = f'(x) \ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x)-df(x,dx)$ is very small compared to $|dx|$ as $|dx| \to 0$. </p> Answer 23: <p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\; dx$. (This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) By leaving $dx$ and $dy$ as variable, it's possible to let $dx$ take on any value you like, e.g., $\Delta x$. Then the linear approximation theorem says that if $dx = \Delta x$, then $\Delta y \approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p> Answer 24: <p>One dimensional calculus is very special. For example</p> <ul> <li>If $f$ is any continuous function, then $f(x) \, \mathrm{d}x$ has an antiderivative</li> <li>If $y$ and $u$ depend differentiably on $x$, then $\mathrm{d}y$ and $\mathrm{d}u$ are proportional to each other. 
This means it makes sense to speak of the ratio $\frac{\mathrm{d}y}{\mathrm{d}u}$ (at least wherever $\mathrm{d}u$ is 'nonzero')</li> <li>To talk about definite integrals $\int_a^b f(x) \, \mathrm{d}x$ requires that we only need to specify the endpoints</li> </ul> <p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p> <p>Exactness is the property of being antidifferentiable:</p> <blockquote> <p>A differential $\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\omega = \mathrm{d}f$</p> </blockquote> <p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\gamma$ is a path from $P$ to $Q$, then</p> <p>$$ \int_\gamma \mathrm{d}f = f(Q) - f(P) $$</p> <p>So hopefully you can see exactness being very important.</p> <hr> <p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\mathrm{d}x \wedge \mathrm{d}y = -\mathrm{d}y \wedge \mathrm{d}x$. (and also $\mathrm{d}x \wedge \mathrm{d}x = 0$).</p> <p>W can use this to differentiate a differential form: to get a 2-form:</p> <p>$$ \mathrm{d}\left( u \, \mathrm{d}v \right) = \mathrm{d}u \wedge \mathrm{d} v $$ and furthermore to even higher degrees.</p> <p>For example, in $\mathbb{R}^2$, we could compute $$\begin{align} \mathrm{d}\left( f(x,y) \, \mathrm{d}x + g(x,y) \, \mathrm{d} y \right) &amp;= \mathrm{d}f(x,y) \wedge \mathrm{d}x + \mathrm{d}g(x,y) \wedge \mathrm{d} y \\&amp;= (f_1(x,y) \mathrm{d}x + f_2(x,y) \mathrm{d}y) \wedge \mathrm{d}x + (g_1(x,y) \mathrm{d}x + g_2(x,y) \mathrm{d}y) \wedge \mathrm{d}y \\&amp;= (g_1(x,y) - f_2(x,y)) \mathrm{d}x \wedge \mathrm{d}y \end{align}$$</p> <p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p> <p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\omega$ satisfies $\mathrm{d} \omega = 0$. These are also special:</p> <blockquote> <p>A differential form $\omega$ is <em>closed</em> if and only if $\mathrm{d} \omega = 0$</p> </blockquote> <p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\mathbb{R}^n$, every closed differential form is exact.</p> <p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\mathbb{R}^n$ it's the only step you need!</p> Answer 25: <ul> <li><strong>Motivation</strong></li> </ul> <p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\boxed{dy = f'(x) \ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p> <ul> <li><strong>Geometric Intuition</strong></li> </ul> <p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. Consequently, $dy$ and $dx$ act as scales for measurement of the variables $dy$ and $dx$, just as the $x$ and $y$ axes act as scales for measurement of $x$ and $y$. In this coordinate system, you can draw the line $dy = f'(x) \ dx$ -- it is tangent to $y = f(x)$ (at some fixed $x$) and has slope $f'(x)$. 
The thing is that we're not defining it within the $x$-$y$ coordinate system explicitly, but rather the $dx$-$dy$ coordinate system that we constructed. </p> <p>By looking at it this way, it simplifies this notion down to nothing more than recognizing the slope of a line, because from this, you may derive the popular quotient relation, $\displaystyle{\frac{dy}{dx} = f'(x)}$ for $dx \neq 0$. </p> <ul> <li><strong>Brief Historical Context</strong></li> </ul> <p>The $d$-notation we use for the differentials $dx$ and $dy$ goes back to Leibniz's work in the seventeenth century, but Leibniz never defined the derivative by the limit of a quotient. </p> <ul> <li><strong>More In-Depth Discussion</strong></li> </ul> <p>We define the differential of a function $f(x)$ as a function of $x$ as well as another independent variable $dx$ whose value is given by $df = f'(x) \ dx$. This differential is, indeed, defined at each point where $f'(x)$ exists. It's worth noting that $dx$ can take any value. For a fixed $x$, the value of the differential is just a multiple of $dx$ (since, for a fixed $x$, $f'(x)$ is a constant). </p> <p>Let's re-state differentiability in the following way: a function $f$ is called differentiable at $x$ if it is defined in a neighborhood of $x$ (as well as at $x$ itself) and if there exists some $\alpha$ so that </p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = 0$$</p> <p>This can be reduced to say that </p> <p>$$\lim_{\Delta x \to 0} \left|\frac{f(x + \Delta x) - f(x)}{\Delta x} - \alpha \right| = 0$$</p> <p>Which is to say</p> <p>$$\lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} = \alpha$$</p> <p>Which just says that $f'(x)$ exists and is finite with value $\alpha$. Let the differential of $f$ be denoted by $df(x, dx)$. It's important to note that the dependence of $df$ on $x$ is not the same as its dependence on $dx$, so perhaps the notation I used is not quite clear. We're noting that $df$ depends linearly on $dx$. We can do some re-writing:</p> <p>$$\lim_{\Delta x \to 0} \frac{|f(x + \Delta x) - f(x) - \alpha \Delta x|}{|\Delta x|} = \lim_{dx \to 0} \frac{|f(x + dx) - f(x) - df(x, dx)|}{|dx|}$$</p> <p>because $\alpha \ dx = f'(x) \ dx = df(x, dx)$ (note that, for notational purposes, I've replaced $\Delta x$ with $dx$, since $dx$ can take any value). So, we've learned that, for a fixed $x$, $df$ is a multiple of $dx$ and that the limit relation we established above is valid. What the relation above tells us is that $df(x, dx)$ is a decent approximation to $f(x + dx) - f(x)$ in the sense that the difference $f(x + dx) - f(x)-df(x,dx)$ is very small compared to $|dx|$ as $|dx| \to 0$. </p> Answer 26: <p>You can think of differential (1-forms) as new, independent variables. If $y=f(x)$, then it's convenient to place the origin of the $dx-dy$ axes on the curve given by $y=f(x)$. The new variables are related to $y$ and $x$ by the equation $dy = f'(x)\; dx$. (This is the equation of the tangent line at that point, but given in $dx-dy$ coordinates.) By leaving $dx$ and $dy$ as variable, it's possible to let $dx$ take on any value you like, e.g., $\Delta x$. Then the linear approximation theorem says that if $dx = \Delta x$, then $\Delta y \approx dy.$ And we can see that from the picture described above. This can be extended to functions of more than one variable.</p> Answer 27: <p>One dimensional calculus is very special. 
For example</p> <ul> <li>If $f$ is any continuous function, then $f(x) \, \mathrm{d}x$ has an antiderivative</li> <li>If $y$ and $u$ depend differentiably on $x$, then $\mathrm{d}y$ and $\mathrm{d}u$ are proportional to each other. This means it makes sense to speak of the ratio $\frac{\mathrm{d}y}{\mathrm{d}u}$ (at least wherever $\mathrm{d}u$ is 'nonzero')</li> <li>To talk about definite integrals $\int_a^b f(x) \, \mathrm{d}x$ requires that we only need to specify the endpoints</li> </ul> <p>In higher dimensions, things don't work out so nicely; when we <em>do</em> have these nice properties, it is a special thing deserving a name.</p> <p>Exactness is the property of being antidifferentiable:</p> <blockquote> <p>A differential $\omega$ is <em>exact</em> if and only if there exists a function $f$ such that $\omega = \mathrm{d}f$</p> </blockquote> <p>It turns out that exact forms satisfy the fundamental theorem of calculus: if $\gamma$ is a path from $P$ to $Q$, then</p> <p>$$ \int_\gamma \mathrm{d}f = f(Q) - f(P) $$</p> <p>So hopefully you can see exactness being very important.</p> <hr> <p>We can define a product on differentials called the <em>wedge product</em>; it sort of behaves like you'd expect a product to behave, except that it has the weird feature that it's antisymmetric on ordinary differentials: $\mathrm{d}x \wedge \mathrm{d}y = -\mathrm{d}y \wedge \mathrm{d}x$. (and also $\mathrm{d}x \wedge \mathrm{d}x = 0$).</p> <p>W can use this to differentiate a differential form: to get a 2-form:</p> <p>$$ \mathrm{d}\left( u \, \mathrm{d}v \right) = \mathrm{d}u \wedge \mathrm{d} v $$ and furthermore to even higher degrees.</p> <p>For example, in $\mathbb{R}^2$, we could compute $$\begin{align} \mathrm{d}\left( f(x,y) \, \mathrm{d}x + g(x,y) \, \mathrm{d} y \right) &amp;= \mathrm{d}f(x,y) \wedge \mathrm{d}x + \mathrm{d}g(x,y) \wedge \mathrm{d} y \\&amp;= (f_1(x,y) \mathrm{d}x + f_2(x,y) \mathrm{d}y) \wedge \mathrm{d}x + (g_1(x,y) \mathrm{d}x + g_2(x,y) \mathrm{d}y) \wedge \mathrm{d}y \\&amp;= (g_1(x,y) - f_2(x,y)) \mathrm{d}x \wedge \mathrm{d}y \end{align}$$</p> <p>(I use $h_n$ to refer to the partial derivative of a function with respect to its $n$-th argument while holding the other arguments constant)</p> <p>You might notice that the test using Schwarz' rule that your image is alluding to boils down to checking if a differential $\omega$ satisfies $\mathrm{d} \omega = 0$. These are also special:</p> <blockquote> <p>A differential form $\omega$ is <em>closed</em> if and only if $\mathrm{d} \omega = 0$</p> </blockquote> <p>It turns out that every exact differential form is closed, and conversely, in extremely nice spaces like $\mathbb{R}^n$, every closed differential form is exact.</p> <p>Since testing if a form is closed is relatively easy and computational, it provides a very nice first step in checking whether a differential is exact &mdash; and in very nice spaces like $\mathbb{R}^n$ it's the only step you need!</p> Answer 28: <ul> <li><strong>Motivation</strong></li> </ul> <p>If we write $y = f(x)$, and $f'(x)$ exists for some particular $x$, then it is standard to write $\boxed{dy = f'(x) \ dx}$. If $x$ is a fixed value, then $dy$ is a dependent variable that depends on $dx$ as the independent variable. </p> <ul> <li><strong>Geometric Intuition</strong></li> </ul> <p>Essentially, we're introducing $dy$ and $dx$ as new coordinate axes that meet (and form their origin) at a point that lies on a curve $f(x)$. 
Is there a Fibonacci number whose square is another Fibonacci?
2,102,037
7
https://math.stackexchange.com/questions/2102037/is-there-a-fibonacci-number-whose-square-is-another-fibonacci
<h2>Introduction</h2> <p>We’re all familiar with the <a href="https://en.wikipedia.org/wiki/Fibonacci_number" rel="nofollow noreferrer">Fibonacci sequence</a>:</p> <p>$1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 \ldots$</p> <p>I was messing around a little on a piece of paper and came to the conclusion that there wouldn’t be any numbers in the sequence that, when squared, give another number contained in the sequence (except $1^2$, of course, but we’ll leave that out of it).</p> <h2>Research</h2> <p>Although it occurred to me that there probably isn’t any number in the sequence with this property other than $1$, I don't know how to prove or disprove this.</p> <h2>Moreover</h2> <p>Whilst I probably could write an application to retrieve the answer for me, I’m more interested in a mathematical solution. Should you clever people prove not to have a great solution, perhaps someone can point me in the right direction?</p>
[ { "answer_id": 2102053, "score": 4, "text": "<p>Let $n&gt;1$. Then $F_n\\mid F_m$ if and only if $n\\mid m$. (Assuming we start at $F_0=0,F_1=1$.)</p>\n\n<p>This is because we get, inductively, $F_{n+k}\\equiv F_{n+1}F_k\\pmod{F_n}$, and $F_{n+1}$ and $F_{n}$ are relatively prime.</p>\n\n<p>But you can use <a href=\"https://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression\" rel=\"nofollow noreferrer\">Demoivre's formula</a> to show that for $n\\geq 2$ that $F_n^2&lt;F_{2n}$.</p>\n\n<p>Specifically, with $\\phi=\\frac{1+\\sqrt{5}}{2}$ and $\\rho = \\frac{-1}{\\phi}=\\frac{1-\\sqrt{5}}{2}$ we have that:</p>\n\n<p>$$F_{n}=\\frac 1{\\sqrt{5}}\\left(\\phi^n-\\rho^n\\right)$$</p>\n\n<p>And $F_{2n}=F_n\\left(\\phi^n+\\rho^n\\right)$. So you need to snow that $\\phi^{n}+\\rho^n &gt;\\frac{1}{\\sqrt{5}}(\\phi^n-\\rho^n)$, or $$(\\sqrt{5}-1)\\phi^n&gt;-(\\sqrt{5}+1)\\rho^n$$</p>\n" }, { "answer_id": 2102117, "score": 3, "text": "<p>a non rigorous but maybe easy to understand proof:</p>\n\n<p>Assume such a number $x$ exists, $x &gt; 1, a &lt; x,$ then the sequence goes with,\n$a, x, x+a, 2x+a, 3x+2a, 5x+3a, ...$</p>\n\n<p>notice the coefficients of the linear expression are in Fibonacci sequence. So that</p>\n\n<p>$ax + (x-a)a,$<br>\n$x^2 + a^2 $</p>\n\n<p>are next to each other in the sequence.</p>\n\n<p>But $ax + (x-a)a = x^2 - (x-a)^2 &lt; x^2,$ and the next one $&gt; x^2$. so $x^2$ is not in the sequence.</p>\n" }, { "answer_id": 2103681, "score": 3, "text": "<p>Working off of Ryan Y's approach:</p>\n\n<p>Note that we have $F_{2n-1} = F_{n-1}^2+F_n^2$; thus, we know that $F_{2n-1}\\gt F_n^2$ as long as $F_{n-1}^2\\gt 0$, which will happen for all $n\\gt 1$.</p>\n\n<p>But we also have $F_{2n-2} = 2F_nF_{n-1}-F_{n-1}^2$, and so $F_n^2\\gt F_{2n-2}$ as long as $F_n^2-F_{2n-2}\\gt 0$ &mdash; or, in other words, $F_n^2-2F_nF_{n-1}+F_{n-1}^2\\gt 0$, or $(F_n-F_{n-1})^2\\gt 0$. And so this holds whenever $F_n\\gt F_{n-1}$, which will hold as soon as $n\\gt 2$.</p>\n\n<p>Thus, we have explicitly that $F_{2n-2}\\lt F_n^2\\lt F_{2n-1}$ for all $n\\gt 2$.</p>\n\n<p>(These doubling identities are classical, and there are many ways to derive them; I prefer the matrix-based approach seen in <a href=\"https://math.stackexchange.com/a/1247809/785\">this answer</a> because it's very general and it almost immediately generalizes even further to other linear recurrence sequences.)</p>\n" }, { "answer_id": 2363377, "score": 1, "text": "<p>This is a trivial proof from a stronger result:</p>\n\n<p>It has been proved that the only squares in the Fibonacci sequence are $1$ and $144.$</p>\n\n<p>Since $12$ is not a Fibonacci number, $1^2=1$ is the only solution, but as you mentioned, we are leaving it out.</p>\n\n<p><a href=\"https://math.la.asu.edu/~checkman/SquareFibonacci.html\" rel=\"nofollow noreferrer\">See here for more info</a></p>\n" }, { "answer_id": 2102053, "score": 4, "text": "<p>Let $n&gt;1$. Then $F_n\\mid F_m$ if and only if $n\\mid m$. 
And so this holds whenever $F_n\\gt F_{n-1}$, which will hold as soon as $n\\gt 2$.</p>\n\n<p>Thus, we have explicitly that $F_{2n-2}\\lt F_n^2\\lt F_{2n-1}$ for all $n\\gt 2$.</p>\n\n<p>(These doubling identities are classical, and there are many ways to derive them; I prefer the matrix-based approach seen in <a href=\"https://math.stackexchange.com/a/1247809/785\">this answer</a> because it's very general and it almost immediately generalizes even further to other linear recurrence sequences.)</p>\n" }, { "answer_id": 2363377, "score": 1, "text": "<p>This is a trivial proof from a stronger result:</p>\n\n<p>It has been proved that the only squares in the Fibonacci sequence are $1$ and $144.$</p>\n\n<p>Since $12$ is not a Fibonacci number, $1^2=1$ is the only solution, but as you mentioned, we are leaving it out.</p>\n\n<p><a href=\"https://math.la.asu.edu/~checkman/SquareFibonacci.html\" rel=\"nofollow noreferrer\">See here for more info</a></p>\n" }, { "answer_id": 2102053, "score": 4, "text": "<p>Let $n&gt;1$. Then $F_n\\mid F_m$ if and only if $n\\mid m$. (Assuming we start at $F_0=0,F_1=1$.)</p>\n\n<p>This is because we get, inductively, $F_{n+k}\\equiv F_{n+1}F_k\\pmod{F_n}$, and $F_{n+1}$ and $F_{n}$ are relatively prime.</p>\n\n<p>But you can use <a href=\"https://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression\" rel=\"nofollow noreferrer\">Demoivre's formula</a> to show that for $n\\geq 2$ that $F_n^2&lt;F_{2n}$.</p>\n\n<p>Specifically, with $\\phi=\\frac{1+\\sqrt{5}}{2}$ and $\\rho = \\frac{-1}{\\phi}=\\frac{1-\\sqrt{5}}{2}$ we have that:</p>\n\n<p>$$F_{n}=\\frac 1{\\sqrt{5}}\\left(\\phi^n-\\rho^n\\right)$$</p>\n\n<p>And $F_{2n}=F_n\\left(\\phi^n+\\rho^n\\right)$. So you need to snow that $\\phi^{n}+\\rho^n &gt;\\frac{1}{\\sqrt{5}}(\\phi^n-\\rho^n)$, or $$(\\sqrt{5}-1)\\phi^n&gt;-(\\sqrt{5}+1)\\rho^n$$</p>\n" }, { "answer_id": 2102117, "score": 3, "text": "<p>a non rigorous but maybe easy to understand proof:</p>\n\n<p>Assume such a number $x$ exists, $x &gt; 1, a &lt; x,$ then the sequence goes with,\n$a, x, x+a, 2x+a, 3x+2a, 5x+3a, ...$</p>\n\n<p>notice the coefficients of the linear expression are in Fibonacci sequence. So that</p>\n\n<p>$ax + (x-a)a,$<br>\n$x^2 + a^2 $</p>\n\n<p>are next to each other in the sequence.</p>\n\n<p>But $ax + (x-a)a = x^2 - (x-a)^2 &lt; x^2,$ and the next one $&gt; x^2$. so $x^2$ is not in the sequence.</p>\n" }, { "answer_id": 2103681, "score": 3, "text": "<p>Working off of Ryan Y's approach:</p>\n\n<p>Note that we have $F_{2n-1} = F_{n-1}^2+F_n^2$; thus, we know that $F_{2n-1}\\gt F_n^2$ as long as $F_{n-1}^2\\gt 0$, which will happen for all $n\\gt 1$.</p>\n\n<p>But we also have $F_{2n-2} = 2F_nF_{n-1}-F_{n-1}^2$, and so $F_n^2\\gt F_{2n-2}$ as long as $F_n^2-F_{2n-2}\\gt 0$ &mdash; or, in other words, $F_n^2-2F_nF_{n-1}+F_{n-1}^2\\gt 0$, or $(F_n-F_{n-1})^2\\gt 0$. 
And so this holds whenever $F_n\\gt F_{n-1}$, which will hold as soon as $n\\gt 2$.</p>\n\n<p>Thus, we have explicitly that $F_{2n-2}\\lt F_n^2\\lt F_{2n-1}$ for all $n\\gt 2$.</p>\n\n<p>(These doubling identities are classical, and there are many ways to derive them; I prefer the matrix-based approach seen in <a href=\"https://math.stackexchange.com/a/1247809/785\">this answer</a> because it's very general and it almost immediately generalizes even further to other linear recurrence sequences.)</p>\n" }, { "answer_id": 2363377, "score": 1, "text": "<p>This is a trivial proof from a stronger result:</p>\n\n<p>It has been proved that the only squares in the Fibonacci sequence are $1$ and $144.$</p>\n\n<p>Since $12$ is not a Fibonacci number, $1^2=1$ is the only solution, but as you mentioned, we are leaving it out.</p>\n\n<p><a href=\"https://math.la.asu.edu/~checkman/SquareFibonacci.html\" rel=\"nofollow noreferrer\">See here for more info</a></p>\n" } ]
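For reference, the doubling identities cited in the answers above follow from the standard Fibonacci matrix identity; this is only a sketch of the general matrix approach one answer links to, not a quote from it. Induction on $n$ gives
$$\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^n = \begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix},$$
and comparing entries of $A^{2n}=A^n\cdot A^n$ (with $A$ the matrix above) yields
$$F_{2n} = F_n\left(F_{n+1}+F_{n-1}\right), \qquad F_{2n-1} = F_n^2 + F_{n-1}^2.$$
Subtracting the second from the first and using $F_{n+1}=F_n+F_{n-1}$ then gives $F_{2n-2} = 2F_nF_{n-1} - F_{n-1}^2$, exactly the two identities used above.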
Question: Is there a Fibonacci number whose square is another Fibonacci? Answer 1: <p>Let $n&gt;1$. Then $F_n\mid F_m$ if and only if $n\mid m$. (Assuming we start at $F_0=0,F_1=1$.)</p> <p>This is because we get, inductively, $F_{n+k}\equiv F_{n+1}F_k\pmod{F_n}$, and $F_{n+1}$ and $F_{n}$ are relatively prime.</p> <p>But you can use <a href="https://en.wikipedia.org/wiki/Fibonacci_number#Closed-form_expression" rel="nofollow noreferrer">de Moivre's formula</a> to show that, for $n\geq 2$, $F_n^2&lt;F_{2n}$.</p> <p>Specifically, with $\phi=\frac{1+\sqrt{5}}{2}$ and $\rho = \frac{-1}{\phi}=\frac{1-\sqrt{5}}{2}$ we have that:</p> <p>$$F_{n}=\frac 1{\sqrt{5}}\left(\phi^n-\rho^n\right)$$</p> <p>And $F_{2n}=F_n\left(\phi^n+\rho^n\right)$. So you need to show that $\phi^{n}+\rho^n &gt;\frac{1}{\sqrt{5}}(\phi^n-\rho^n)$, or $$(\sqrt{5}-1)\phi^n&gt;-(\sqrt{5}+1)\rho^n$$</p>
Answer 2: <p>A non-rigorous but perhaps easy-to-understand proof:</p> <p>Assume such a number $x$ exists, with $x &gt; 1$ and $a &lt; x$; then the sequence continues $a, x, x+a, 2x+a, 3x+2a, 5x+3a, \ldots$</p> <p>Notice the coefficients of these linear expressions follow the Fibonacci sequence, so that</p> <p>$ax + (x-a)a,$<br> $x^2 + a^2 $</p> <p>are next to each other in the sequence.</p> <p>But $ax + (x-a)a = x^2 - (x-a)^2 &lt; x^2,$ and the next one is $&gt; x^2$, so $x^2$ is not in the sequence.</p>
Answer 3: <p>Working off of Ryan Y's approach:</p> <p>Note that we have $F_{2n-1} = F_{n-1}^2+F_n^2$; thus, we know that $F_{2n-1}\gt F_n^2$ as long as $F_{n-1}^2\gt 0$, which will happen for all $n\gt 1$.</p> <p>But we also have $F_{2n-2} = 2F_nF_{n-1}-F_{n-1}^2$, and so $F_n^2\gt F_{2n-2}$ as long as $F_n^2-F_{2n-2}\gt 0$ &mdash; or, in other words, $F_n^2-2F_nF_{n-1}+F_{n-1}^2\gt 0$, or $(F_n-F_{n-1})^2\gt 0$. And so this holds whenever $F_n\gt F_{n-1}$, which will hold as soon as $n\gt 2$.</p> <p>Thus, we have explicitly that $F_{2n-2}\lt F_n^2\lt F_{2n-1}$ for all $n\gt 2$.</p> <p>(These doubling identities are classical, and there are many ways to derive them; I prefer the matrix-based approach seen in <a href="https://math.stackexchange.com/a/1247809/785">this answer</a> because it's very general and it almost immediately generalizes even further to other linear recurrence sequences.)</p>
Answer 4: <p>This is a trivial proof from a stronger result:</p> <p>It has been proved that the only squares in the Fibonacci sequence are $1$ and $144.$</p> <p>Since $12$ is not a Fibonacci number, $1^2=1$ is the only solution, but as you mentioned, we are leaving it out.</p> <p><a href="https://math.la.asu.edu/~checkman/SquareFibonacci.html" rel="nofollow noreferrer">See here for more info</a></p>
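As a quick numeric sanity check of the bounds in Answers 1 and 3, here is a minimal sketch; the helper name and ranges are my own choices, not from the original posts.

```python
# Minimal check that F_{2n-2} < F_n^2 < F_{2n-1} for n > 2
# (indexing F_0 = 0, F_1 = 1, as in Answer 1).

def fib_upto(m):
    """Return the list [F_0, F_1, ..., F_m]."""
    fs = [0, 1]
    while len(fs) <= m:
        fs.append(fs[-1] + fs[-2])
    return fs

F = fib_upto(200)
for n in range(3, 100):
    # F_n^2 sits strictly between two consecutive Fibonacci numbers,
    # so it cannot itself be a Fibonacci number.
    assert F[2 * n - 2] < F[n] ** 2 < F[2 * n - 1]

print("bounds hold for 3 <= n < 100")
```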
Completing the square for a quadratic equation
2,102,083
7
https://math.stackexchange.com/questions/2102083/completing-the-square-for-quadratic-equation
<p>I'm learning algebra (at age $66$) with home schooling materials. I am doing well and enjoying it. My question is this: The material says I can use "<em>Completing the square</em>" to solve any quadratic equation. I solved this equation $$6x^2+24x=0$$ by factoring (got answers $0$ and $-4$). But when I try to solve the same equation by completing the square, I get a different answer ($\pm 12$). I must be doing something wrong but can't figure it out, as I think I'm using the right steps to complete the square. Any help appreciated.</p>
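For a side-by-side check of the two methods from the question (the arithmetic here is mine; the answers below explain the underlying issue): factoring gives
$$6x^2+24x=6x(x+4)=0 \implies x=0 \text{ or } x=-4,$$
while completing the square, after first dividing both sides by the leading coefficient $6$, gives
$$x^2+4x=0 \implies (x+2)^2=4 \implies x+2=\pm 2 \implies x=0 \text{ or } x=-4.$$
The two methods agree; skipping the division by $6$ is the step the answers below identify as the likely source of the discrepancy.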
[ { "answer_id": 2102103, "score": 6, "text": "<p>Your factoring solution is correct, and you can see it's correct by plugging in the solutions into the original formula. To complete the square, the first thing we want to do is divide both sides by the coefficient on the <span class=\"math-container\">$x^2$</span> term, which is <span class=\"math-container\">$6$</span> for this problem. Doing this gives <span class=\"math-container\">$x^2 + 4x = 0$</span>. Adding <span class=\"math-container\">$4$</span> to both sides completes the square:</p>\n\n<p><span class=\"math-container\">\\begin{align}\n&amp;x^2 + 4x = 0 \\\\\n&amp;x^2 + 4x + 4 = 4 \\\\\n&amp;(x+2)^2 = 4\n\\end{align}</span></p>\n\n<p>To finish up, take a square root of both sides (and don't forget about the <span class=\"math-container\">$\\pm$</span> when doing this).</p>\n\n<hr>\n\n<p><strong>P.S.</strong> the quadratic formula <span class=\"math-container\">$x = \\displaystyle \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$</span> that gives the solutions to an arbitrary <span class=\"math-container\">$ax^2 + bx + c = 0$</span> can be derived with this completing the square method.</p>\n" }, { "answer_id": 2102106, "score": 2, "text": "<p>Let us start with $6x^2+24x=6(x^2+4x)$. Skip $6$ for a moment to deal with $x^2+4x$. This is a part of a square $(x+2)^2$. If one square is $x^2$, then one term of the sqyare os a sum is $x$. Because double product is $4x,$ then the second summand should be $2$. Now we have $$(x+2)^2=x^2+4x+4.$$ To get $x^2+4x$ we need to subtract $4$: $$x^2+4x=(x+2)^2-4.$$ Finally $$6x^2+24x=6\\bigl((x+2)^2-4\\bigr)=6(x+2)^2-24.$$</p>\n" }, { "answer_id": 2102373, "score": 2, "text": "<p><strong><em>From the comments:</em></strong> Your problem is the $a$ value in $ax^2+bx+c$. When you're completing the square, the conventional method of approach is to divide both sides by $a$, to get$$ax^2+bx+c=0\\implies x^2+\\dfrac bax+\\dfrac ca=0\\tag1$$\nThis reduces the leading coefficent (was $a$) down to $1$, and you can continue as before. Hence,$$\\begin{align*} &amp; 6x^2+24x=0\\\\ &amp; \\implies x^2+4x=0\\\\ &amp; \\implies x^2+4x+\\left(\\dfrac 42\\right)^2=\\left(\\dfrac 42\\right)^2\\\\ &amp; \\implies (x+2)^2=4\\\\ &amp; \\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\end{align*}$$\nAnd continue as before.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Something worth noting:</strong></p>\n</blockquote>\n\n<p>This is exactly how the quadratic formula can be found. Start with the general quadratic $ax^2+bx+c=0$ and divide both sides by $a$. Then complete the square, and solve for $x$.</p>\n" }, { "answer_id": 2102103, "score": 6, "text": "<p>Your factoring solution is correct, and you can see it's correct by plugging in the solutions into the original formula. To complete the square, the first thing we want to do is divide both sides by the coefficient on the <span class=\"math-container\">$x^2$</span> term, which is <span class=\"math-container\">$6$</span> for this problem. Doing this gives <span class=\"math-container\">$x^2 + 4x = 0$</span>. 
Adding <span class=\"math-container\">$4$</span> to both sides completes the square:</p>\n\n<p><span class=\"math-container\">\\begin{align}\n&amp;x^2 + 4x = 0 \\\\\n&amp;x^2 + 4x + 4 = 4 \\\\\n&amp;(x+2)^2 = 4\n\\end{align}</span></p>\n\n<p>To finish up, take a square root of both sides (and don't forget about the <span class=\"math-container\">$\\pm$</span> when doing this).</p>\n\n<hr>\n\n<p><strong>P.S.</strong> the quadratic formula <span class=\"math-container\">$x = \\displaystyle \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$</span> that gives the solutions to an arbitrary <span class=\"math-container\">$ax^2 + bx + c = 0$</span> can be derived with this completing the square method.</p>\n" }, { "answer_id": 2102106, "score": 2, "text": "<p>Let us start with $6x^2+24x=6(x^2+4x)$. Skip $6$ for a moment to deal with $x^2+4x$. This is a part of a square $(x+2)^2$. If one square is $x^2$, then one term of the sqyare os a sum is $x$. Because double product is $4x,$ then the second summand should be $2$. Now we have $$(x+2)^2=x^2+4x+4.$$ To get $x^2+4x$ we need to subtract $4$: $$x^2+4x=(x+2)^2-4.$$ Finally $$6x^2+24x=6\\bigl((x+2)^2-4\\bigr)=6(x+2)^2-24.$$</p>\n" }, { "answer_id": 2102373, "score": 2, "text": "<p><strong><em>From the comments:</em></strong> Your problem is the $a$ value in $ax^2+bx+c$. When you're completing the square, the conventional method of approach is to divide both sides by $a$, to get$$ax^2+bx+c=0\\implies x^2+\\dfrac bax+\\dfrac ca=0\\tag1$$\nThis reduces the leading coefficent (was $a$) down to $1$, and you can continue as before. Hence,$$\\begin{align*} &amp; 6x^2+24x=0\\\\ &amp; \\implies x^2+4x=0\\\\ &amp; \\implies x^2+4x+\\left(\\dfrac 42\\right)^2=\\left(\\dfrac 42\\right)^2\\\\ &amp; \\implies (x+2)^2=4\\\\ &amp; \\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\end{align*}$$\nAnd continue as before.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Something worth noting:</strong></p>\n</blockquote>\n\n<p>This is exactly how the quadratic formula can be found. Start with the general quadratic $ax^2+bx+c=0$ and divide both sides by $a$. Then complete the square, and solve for $x$.</p>\n" }, { "answer_id": 2102103, "score": 6, "text": "<p>Your factoring solution is correct, and you can see it's correct by plugging in the solutions into the original formula. To complete the square, the first thing we want to do is divide both sides by the coefficient on the <span class=\"math-container\">$x^2$</span> term, which is <span class=\"math-container\">$6$</span> for this problem. Doing this gives <span class=\"math-container\">$x^2 + 4x = 0$</span>. Adding <span class=\"math-container\">$4$</span> to both sides completes the square:</p>\n\n<p><span class=\"math-container\">\\begin{align}\n&amp;x^2 + 4x = 0 \\\\\n&amp;x^2 + 4x + 4 = 4 \\\\\n&amp;(x+2)^2 = 4\n\\end{align}</span></p>\n\n<p>To finish up, take a square root of both sides (and don't forget about the <span class=\"math-container\">$\\pm$</span> when doing this).</p>\n\n<hr>\n\n<p><strong>P.S.</strong> the quadratic formula <span class=\"math-container\">$x = \\displaystyle \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$</span> that gives the solutions to an arbitrary <span class=\"math-container\">$ax^2 + bx + c = 0$</span> can be derived with this completing the square method.</p>\n" }, { "answer_id": 2102106, "score": 2, "text": "<p>Let us start with $6x^2+24x=6(x^2+4x)$. Skip $6$ for a moment to deal with $x^2+4x$. This is a part of a square $(x+2)^2$. 
If one square is $x^2$, then one term of the sqyare os a sum is $x$. Because double product is $4x,$ then the second summand should be $2$. Now we have $$(x+2)^2=x^2+4x+4.$$ To get $x^2+4x$ we need to subtract $4$: $$x^2+4x=(x+2)^2-4.$$ Finally $$6x^2+24x=6\\bigl((x+2)^2-4\\bigr)=6(x+2)^2-24.$$</p>\n" }, { "answer_id": 2102373, "score": 2, "text": "<p><strong><em>From the comments:</em></strong> Your problem is the $a$ value in $ax^2+bx+c$. When you're completing the square, the conventional method of approach is to divide both sides by $a$, to get$$ax^2+bx+c=0\\implies x^2+\\dfrac bax+\\dfrac ca=0\\tag1$$\nThis reduces the leading coefficent (was $a$) down to $1$, and you can continue as before. Hence,$$\\begin{align*} &amp; 6x^2+24x=0\\\\ &amp; \\implies x^2+4x=0\\\\ &amp; \\implies x^2+4x+\\left(\\dfrac 42\\right)^2=\\left(\\dfrac 42\\right)^2\\\\ &amp; \\implies (x+2)^2=4\\\\ &amp; \\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\end{align*}$$\nAnd continue as before.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Something worth noting:</strong></p>\n</blockquote>\n\n<p>This is exactly how the quadratic formula can be found. Start with the general quadratic $ax^2+bx+c=0$ and divide both sides by $a$. Then complete the square, and solve for $x$.</p>\n" }, { "answer_id": 2102103, "score": 6, "text": "<p>Your factoring solution is correct, and you can see it's correct by plugging in the solutions into the original formula. To complete the square, the first thing we want to do is divide both sides by the coefficient on the <span class=\"math-container\">$x^2$</span> term, which is <span class=\"math-container\">$6$</span> for this problem. Doing this gives <span class=\"math-container\">$x^2 + 4x = 0$</span>. Adding <span class=\"math-container\">$4$</span> to both sides completes the square:</p>\n\n<p><span class=\"math-container\">\\begin{align}\n&amp;x^2 + 4x = 0 \\\\\n&amp;x^2 + 4x + 4 = 4 \\\\\n&amp;(x+2)^2 = 4\n\\end{align}</span></p>\n\n<p>To finish up, take a square root of both sides (and don't forget about the <span class=\"math-container\">$\\pm$</span> when doing this).</p>\n\n<hr>\n\n<p><strong>P.S.</strong> the quadratic formula <span class=\"math-container\">$x = \\displaystyle \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$</span> that gives the solutions to an arbitrary <span class=\"math-container\">$ax^2 + bx + c = 0$</span> can be derived with this completing the square method.</p>\n" }, { "answer_id": 2102106, "score": 2, "text": "<p>Let us start with $6x^2+24x=6(x^2+4x)$. Skip $6$ for a moment to deal with $x^2+4x$. This is a part of a square $(x+2)^2$. If one square is $x^2$, then one term of the sqyare os a sum is $x$. Because double product is $4x,$ then the second summand should be $2$. Now we have $$(x+2)^2=x^2+4x+4.$$ To get $x^2+4x$ we need to subtract $4$: $$x^2+4x=(x+2)^2-4.$$ Finally $$6x^2+24x=6\\bigl((x+2)^2-4\\bigr)=6(x+2)^2-24.$$</p>\n" }, { "answer_id": 2102373, "score": 2, "text": "<p><strong><em>From the comments:</em></strong> Your problem is the $a$ value in $ax^2+bx+c$. When you're completing the square, the conventional method of approach is to divide both sides by $a$, to get$$ax^2+bx+c=0\\implies x^2+\\dfrac bax+\\dfrac ca=0\\tag1$$\nThis reduces the leading coefficent (was $a$) down to $1$, and you can continue as before. 
Hence,$$\\begin{align*} &amp; 6x^2+24x=0\\\\ &amp; \\implies x^2+4x=0\\\\ &amp; \\implies x^2+4x+\\left(\\dfrac 42\\right)^2=\\left(\\dfrac 42\\right)^2\\\\ &amp; \\implies (x+2)^2=4\\\\ &amp; \\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\end{align*}$$\nAnd continue as before.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Something worth noting:</strong></p>\n</blockquote>\n\n<p>This is exactly how the quadratic formula can be found. Start with the general quadratic $ax^2+bx+c=0$ and divide both sides by $a$. Then complete the square, and solve for $x$.</p>\n" }, { "answer_id": 2102103, "score": 6, "text": "<p>Your factoring solution is correct, and you can see it's correct by plugging in the solutions into the original formula. To complete the square, the first thing we want to do is divide both sides by the coefficient on the <span class=\"math-container\">$x^2$</span> term, which is <span class=\"math-container\">$6$</span> for this problem. Doing this gives <span class=\"math-container\">$x^2 + 4x = 0$</span>. Adding <span class=\"math-container\">$4$</span> to both sides completes the square:</p>\n\n<p><span class=\"math-container\">\\begin{align}\n&amp;x^2 + 4x = 0 \\\\\n&amp;x^2 + 4x + 4 = 4 \\\\\n&amp;(x+2)^2 = 4\n\\end{align}</span></p>\n\n<p>To finish up, take a square root of both sides (and don't forget about the <span class=\"math-container\">$\\pm$</span> when doing this).</p>\n\n<hr>\n\n<p><strong>P.S.</strong> the quadratic formula <span class=\"math-container\">$x = \\displaystyle \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}$</span> that gives the solutions to an arbitrary <span class=\"math-container\">$ax^2 + bx + c = 0$</span> can be derived with this completing the square method.</p>\n" }, { "answer_id": 2102106, "score": 2, "text": "<p>Let us start with $6x^2+24x=6(x^2+4x)$. Skip $6$ for a moment to deal with $x^2+4x$. This is a part of a square $(x+2)^2$. If one square is $x^2$, then one term of the sqyare os a sum is $x$. Because double product is $4x,$ then the second summand should be $2$. Now we have $$(x+2)^2=x^2+4x+4.$$ To get $x^2+4x$ we need to subtract $4$: $$x^2+4x=(x+2)^2-4.$$ Finally $$6x^2+24x=6\\bigl((x+2)^2-4\\bigr)=6(x+2)^2-24.$$</p>\n" }, { "answer_id": 2102373, "score": 2, "text": "<p><strong><em>From the comments:</em></strong> Your problem is the $a$ value in $ax^2+bx+c$. When you're completing the square, the conventional method of approach is to divide both sides by $a$, to get$$ax^2+bx+c=0\\implies x^2+\\dfrac bax+\\dfrac ca=0\\tag1$$\nThis reduces the leading coefficent (was $a$) down to $1$, and you can continue as before. Hence,$$\\begin{align*} &amp; 6x^2+24x=0\\\\ &amp; \\implies x^2+4x=0\\\\ &amp; \\implies x^2+4x+\\left(\\dfrac 42\\right)^2=\\left(\\dfrac 42\\right)^2\\\\ &amp; \\implies (x+2)^2=4\\\\ &amp; \\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\ldots\\end{align*}$$\nAnd continue as before.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Something worth noting:</strong></p>\n</blockquote>\n\n<p>This is exactly how the quadratic formula can be found. Start with the general quadratic $ax^2+bx+c=0$ and divide both sides by $a$. Then complete the square, and solve for $x$.</p>\n" }, { "answer_id": 2102103, "score": 6, "text": "<p>Your factoring solution is correct, and you can see it's correct by plugging in the solutions into the original formula. 
Question: Completing the square for quadratic equation Answer 1: <p>Your factoring solution is correct, and you can see it's correct by plugging in the solutions into the original formula. To complete the square, the first thing we want to do is divide both sides by the coefficient on the <span class="math-container">$x^2$</span> term, which is <span class="math-container">$6$</span> for this problem. Doing this gives <span class="math-container">$x^2 + 4x = 0$</span>. Adding <span class="math-container">$4$</span> to both sides completes the square:</p> <p><span class="math-container">\begin{align} &amp;x^2 + 4x = 0 \\ &amp;x^2 + 4x + 4 = 4 \\ &amp;(x+2)^2 = 4 \end{align}</span></p> <p>To finish up, take a square root of both sides (and don't forget about the <span class="math-container">$\pm$</span> when doing this).</p> <hr> <p><strong>P.S.</strong> the quadratic formula <span class="math-container">$x = \displaystyle \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$</span> that gives the solutions to an arbitrary <span class="math-container">$ax^2 + bx + c = 0$</span> can be derived with this completing the square method.</p> Answer 2: <p>Let us start with $6x^2+24x=6(x^2+4x)$. Skip $6$ for a moment to deal with $x^2+4x$. This is a part of a square $(x+2)^2$. If one square is $x^2$, then one term of the square of a sum is $x$. Because the double product is $4x,$ the second summand should be $2$. Now we have $$(x+2)^2=x^2+4x+4.$$ To get $x^2+4x$ we need to subtract $4$: $$x^2+4x=(x+2)^2-4.$$ Finally $$6x^2+24x=6\bigl((x+2)^2-4\bigr)=6(x+2)^2-24.$$</p> Answer 3: <p><strong><em>From the comments:</em></strong> Your problem is the $a$ value in $ax^2+bx+c$. When you're completing the square, the conventional method of approach is to divide both sides by $a$, to get$$ax^2+bx+c=0\implies x^2+\dfrac bax+\dfrac ca=0\tag1$$ This reduces the leading coefficient (was $a$) down to $1$, and you can continue as before. Hence,$$\begin{align*} &amp; 6x^2+24x=0\\ &amp; \implies x^2+4x=0\\ &amp; \implies x^2+4x+\left(\dfrac 42\right)^2=\left(\dfrac 42\right)^2\\ &amp; \implies (x+2)^2=4\\ &amp; \ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\end{align*}$$ And continue as before.</p> <hr> <blockquote> <p><strong>Something worth noting:</strong></p> </blockquote> <p>This is exactly how the quadratic formula can be found. Start with the general quadratic $ax^2+bx+c=0$ and divide both sides by $a$. Then complete the square, and solve for $x$.</p>
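<p>As a quick sanity check on the algebra above, the completed-square identity and the roots can be verified symbolically. A minimal sketch, assuming sympy is available:</p>
<pre><code>from sympy import symbols, expand, solve, Eq

x = symbols('x')

# The completed-square form derived above: 6x^2 + 24x = 6(x + 2)^2 - 24
assert expand(6*(x + 2)**2 - 24 - (6*x**2 + 24*x)) == 0

# Roots of 6x^2 + 24x = 0, matching both the factoring and the
# completing-the-square solutions
print(solve(Eq(6*x**2 + 24*x, 0), x))  # [-4, 0]
</code></pre>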
How does this converging function appear to diverge?
2,102,637
7
https://math.stackexchange.com/questions/2102637/how-does-this-converging-function-appear-to-diverge
<p>I found something while trying to analyze the integral $\displaystyle\int_0^\infty \frac{\sin(mx)} {x}\mathrm dx $</p> <p>Let us suppose we already know that the value of the integral is $\pi/2$ when $m&gt;0$, $0$ when $m=0$ and $-\pi/2$ when $m&lt;0$.</p> <p>Let $f(m)=\displaystyle\int_0^\infty \dfrac{\sin(mx)} {x} \mathrm dx$ </p> <p>Thus $f'(m)=\displaystyle\dfrac{\mathrm d}{\mathrm dm}\int_0^\infty \frac{\sin(mx)} {x} \mathrm dx=\int_0^\infty \frac{\partial}{\partial m}\dfrac{\sin(mx)} {x}\mathrm dx=\int_0^\infty \cos(mx)\mathrm dx=\lim\limits_{x \to \infty} \frac{\sin(mx)}{m}$, which does not converge.</p> <p>However, $f'(m)$ should be $0$ where $m\neq0$, as the value of the integral remains constant.</p> <p>So we get $0= \lim\limits_{x \to \infty} \dfrac{\sin(mx)}{m}$, where $m\neq0$.</p> <p>How does this happen? Please note that I am a high school student and do not know complex analysis.</p> <p>Can we assign the value $0$ to $\lim\limits_{x \to \infty}{\sin(x)}$ as the average value of the sine function remains zero even when $x$ approaches infinity?</p>
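<p>The assumed values can be reproduced symbolically. A minimal sketch, assuming sympy is available, which evaluates the Dirichlet integral directly for a fixed-sign $m$:</p>
<pre><code>from sympy import symbols, sin, integrate, oo, pi

x = symbols('x')
m = symbols('m', positive=True)

# Dirichlet integral: pi/2 for every m > 0
assert integrate(sin(m*x)/x, (x, 0, oo)) == pi/2

# The integrand is odd in m, so a negative m flips the sign
print(integrate(sin(-3*x)/x, (x, 0, oo)))  # -pi/2
</code></pre>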
[ { "answer_id": 2102755, "score": 5, "text": "<p>Say we have $$f(m) = \\int_0^\\infty \\sin(mx)\\frac{dx}{x}$$ and we want to compute the derivative at $m&gt;0.$ We need to take $$\\lim_{h\\rightarrow0}\\frac{f(m+h)-f(m)}{h}. $$ Since both integrals converge, we're free to combine them, and we get $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x}.$$</p>\n\n<p>In order to come to the conclusion that this equals $\\int_0^\\infty\\cos(x)dx,$ we need to be able to justify bringing the limit inside the integral.</p>\n\n<p>The problem of when you can bring a limit inside an integral is well-studied, and some well-known sufficient conditions for when you can are the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"nofollow noreferrer\">Dominated Convergence Theorem</a> and the <a href=\"https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem\" rel=\"nofollow noreferrer\">Monotone Convergence Theorem</a>. Clearly (since the answer would be wrong), this is one case where you can't. This frequently happens in oscillating integrals, but also in lots of other cases as well. In this case, if your integral is relying on cancellation from oscillations to stay finite, taking a limit inside can screw up things. </p>\n\n<p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\\rightarrow 0$. Sometimes, however, the global behavior of the integral is changed when you do this. This generally happens when, as $h\\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary', or 'pushed off to infinity'. </p>\n\n<p>A nice example, a bit different from ours, but where it's easy to see what's going on is $\\int_0^\\infty hx e^{-\\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by substitution. Thus $\\lim_{h\\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \\ne 1$. If you sketch the function for smaller and smaller values of $h$ it's easy to see what's going on. As $h\\rightarrow 0^+,$ the function goes becomes a very tall spike, very close to $z=0,$ with area $1.$ When $h$ goes to zero, the spike that contains all the area gets pushed to the boundary at $x=0.$</p>\n\n<p>For our integral, oscillatory cancellations are ' pushed to infinity' in a certain sense as $h\\rightarrow 0,$ so taking the limit inside causes a divergence. To see how this works, we can rewrite $$\\sin((m+h)x)-\\sin(mx) = \\sin(mx)\\cos(hx)+\\cos(mx)\\sin(hx) -\\sin(mx)$$ so that the integral is $$ \\int_0^\\infty\\frac{\\sin(mx)(\\cos(hx)-1) + \\cos(mx)\\sin(hx)}{hx}dx.$$</p>\n\n<p>When you take the limiting form $\\sin(hx)\\approx hx$ and $\\cos(hx)-1)\\approx \\frac{1}{2}h^2x^2$as $h\\rightarrow 0,$ you completely lose the fact that the functions oscillate. Thus the integral, whose convergence was controlled by this oscillating behavior at large periods $x \\sim 1/h$ can have its convergence destroyed by taking the limit inside. You are letting $h\\rightarrow 0$ at fixed $x$ <em>without</em> appreciating that the function's oscillations when $x$ gets above $1/h$ were the only thing making it converge in the first place. 
(You're pushing the oscillation period off to infinity, so it no longer actually oscillates.)</p>\n\n<p>Thus when we take the limit $h\\rightarrow 0$ inside the integral and get $\\int_0^\\infty \\cos(mx)\\,dx,$ which does not converge, we are no longer quite so surprised.</p>\n\n<p>Going back to the expression $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x},$$\nwe can see another angle. Usually, our intuition says the integrand goes to zero like $h$ for $h\\rightarrow 0,$ so we can pull the $h$ out of the integrand and be left with the derivative inside. However, as we know now, this only works when things are sufficiently uniform.</p>\n\n<p>What happens in our case is that even though the difference $\\sin((m+h)x)-\\sin(mx)$ goes to zero like $h$ when $h\\rightarrow0,$ the integral of the difference is actually always zero, for any $h$ (so long as $m+h$ stays positive).</p>\n\n<p>To see this, go back to when the integrals were separate and write $$ f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}\\left(\\int_0^\\infty \\sin((m+h)x)\\frac{dx}{x} - \\int_0^\\infty \\sin(mx)\\frac{dx}{x}\\right).$$ Since we know the integrals are both convergent, we can change variables. Because of the $dx/x$, they both go to $\\int_0^\\infty \\sin(u)\\,du/u$, which means they're the same, and we have $$f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}\\cdot 0=0,$$ which is the correct answer.</p>\n\n<p>Intuitively, stretching out the sine waves to get more area is exactly cancelled out by the fact that the amplitude decays like $1/x$ when it stretches out.</p>\n" }, { "answer_id": 2102741, "score": 1, "text": "<p>What you have just encountered is an instance where it is not possible to differentiate under the integral sign.</p>\n\n<p>There are many references where you can find conditions under which this is possible. Personally, I like the book <strong>Probability With a View Towards Statistics</strong> by Hoffman-Jorgensen, which contains a huge amount of detailed and useful measure theory results.</p>\n" } ]
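Both of the claims just made (that $f$ is constant for $m&gt;0$, and that the spike example defeats a naive limit interchange) are easy to check numerically. The following is a minimal sketch assuming NumPy and SciPy are available; splitting the Dirichlet integral at $x=1$ and handling the oscillatory tail with `quad`'s Fourier weight (`weight='sin'`) are stability choices made here, not part of the original answers.

```python
# Numerical sanity checks for the two claims above (a sketch; assumes SciPy).
import numpy as np
from scipy.integrate import quad

def f(m):
    """f(m) = int_0^oo sin(m x)/x dx, computed as a smooth head on [0, 1]
    plus an oscillatory tail handled by quad's Fourier-weighted rule."""
    # On [0, 1] the integrand extends continuously (it tends to m as x -> 0+).
    head, _ = quad(lambda x: np.sin(m * x) / x if x > 0 else m, 0.0, 1.0)
    # On [1, oo): int_1^oo (1/x) sin(m x) dx via quad's weight='sin' (QAWF).
    tail, _ = quad(lambda x: 1.0 / x, 1.0, np.inf, weight='sin', wvar=m)
    return head + tail

# f(m) should be pi/2 ~ 1.57079633 for every m > 0, hence f'(m) = 0.
for m in (0.5, 1.0, 2.0, 10.0):
    print(f"f({m:5.1f}) = {f(m):.8f}")

# The spike example: int_0^oo h x exp(-h x^2 / 2) dx = 1 for every h > 0,
# even though the integrand -> 0 pointwise as h -> 0+, so interchanging
# the limit and the integral would wrongly give 0.
for h in (1.0, 0.1, 0.01, 0.001):
    val, _ = quad(lambda x: h * x * np.exp(-0.5 * h * x * x), 0.0, np.inf)
    print(f"spike integral at h={h:6.3f}: {val:.8f}")
```

Each $f(m)$ line should agree with $\pi/2 \approx 1.57079633$ to roughly eight digits, and each spike integral should print $1.00000000$, matching the exact values used in the answer.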
</p>\n\n<p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\\rightarrow 0$. Sometimes, however, the global behavior of the integral is changed when you do this. This generally happens when, as $h\\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary', or 'pushed off to infinity'. </p>\n\n<p>A nice example, a bit different from ours, but where it's easy to see what's going on is $\\int_0^\\infty hx e^{-\\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by substitution. Thus $\\lim_{h\\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \\ne 1$. If you sketch the function for smaller and smaller values of $h$ it's easy to see what's going on. As $h\\rightarrow 0^+,$ the function goes becomes a very tall spike, very close to $z=0,$ with area $1.$ When $h$ goes to zero, the spike that contains all the area gets pushed to the boundary at $x=0.$</p>\n\n<p>For our integral, oscillatory cancellations are ' pushed to infinity' in a certain sense as $h\\rightarrow 0,$ so taking the limit inside causes a divergence. To see how this works, we can rewrite $$\\sin((m+h)x)-\\sin(mx) = \\sin(mx)\\cos(hx)+\\cos(mx)\\sin(hx) -\\sin(mx)$$ so that the integral is $$ \\int_0^\\infty\\frac{\\sin(mx)(\\cos(hx)-1) + \\cos(mx)\\sin(hx)}{hx}dx.$$</p>\n\n<p>When you take the limiting form $\\sin(hx)\\approx hx$ and $\\cos(hx)-1)\\approx \\frac{1}{2}h^2x^2$as $h\\rightarrow 0,$ you completely lose the fact that the functions oscillate. Thus the integral, whose convergence was controlled by this oscillating behavior at large periods $x \\sim 1/h$ can have its convergence destroyed by taking the limit inside. You are letting $h\\rightarrow 0$ at fixed $x$ <em>without</em> appreciating that the function's oscillations when $x$ gets above $1/h$ were the only thing making it converge in the first place. (You're pushing the oscillation period off to infinity, so it no longer actually oscillates.)</p>\n\n<p>Thus when we take the limit $h\\rightarrow 0$ inside the integral and get $\\int_0^\\infty \\cos(x)dx,$ which no longer converges, we are no longer quite so surprised.</p>\n\n<p>Going back to the expression $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x},$$\nwe can see another angle. Usually, our intuition says the integrand goes to zero like $h$ for $h\\rightarrow 0,$ so we can pull the $h$ out of the integrand and be left with the derivative inside. However, as we know now, this only works when things are sufficiently uniform.</p>\n\n<p>What happens in our case, is that even though the difference in $\\sin((m+h)x)-\\sin(mx)$ goes to zero like $h$ when $h\\rightarrow0,$ the integral is actually always zero, for any $h$ (so long as $m+h$ stays positive). </p>\n\n<p>To see this, if you go back to when the integrals were separate and write $$ f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}\\left(\\int_0^\\infty \\sin((m+h)x)\\frac{dx}{x} - \\int_0^\\infty \\sin(mx)\\frac{dx}{x}\\right).$$ Since we know the integrals are both convergent, we can change variables. 
Because of the $dx/x$ they both go to $\\int_0^\\infty \\sin(u)du/u$ which means they're the same and we have $$f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}*0=0,$$ which is the correct answer.</p>\n\n<p>Intuitively, stretching out the sine waves to get more area is exactly cancelled out by fact that the amplitude decays like $1/x$ when it stretches out.</p>\n" }, { "answer_id": 2102741, "score": 1, "text": "<p>What you have just encountered is an instance when it is not possible to differentiate under the integral sign. </p>\n\n<p>There are many references where you can find conditions under which this is possible. Personally, I like the book <strong>Probability With a View Towards Statistics</strong> by Hoffman-Jorgensen, which contains a huge amount of detailed and useful measure theory results.</p>\n" }, { "answer_id": 2102755, "score": 5, "text": "<p>Say we have $$f(m) = \\int_0^\\infty \\sin(mx)\\frac{dx}{x}$$ and we want to compute the derivative at $m&gt;0.$ We need to take $$\\lim_{h\\rightarrow0}\\frac{f(m+h)-f(m)}{h}. $$ Since both integrals converge, we're free to combine them, and we get $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x}.$$</p>\n\n<p>In order to come to the conclusion that this equals $\\int_0^\\infty\\cos(x)dx,$ we need to be able to justify bringing the limit inside the integral.</p>\n\n<p>The problem of when you can bring a limit inside an integral is well-studied, and some well-known sufficient conditions for when you can are the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"nofollow noreferrer\">Dominated Convergence Theorem</a> and the <a href=\"https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem\" rel=\"nofollow noreferrer\">Monotone Convergence Theorem</a>. Clearly (since the answer would be wrong), this is one case where you can't. This frequently happens in oscillating integrals, but also in lots of other cases as well. In this case, if your integral is relying on cancellation from oscillations to stay finite, taking a limit inside can screw up things. </p>\n\n<p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\\rightarrow 0$. Sometimes, however, the global behavior of the integral is changed when you do this. This generally happens when, as $h\\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary', or 'pushed off to infinity'. </p>\n\n<p>A nice example, a bit different from ours, but where it's easy to see what's going on is $\\int_0^\\infty hx e^{-\\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by substitution. Thus $\\lim_{h\\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \\ne 1$. If you sketch the function for smaller and smaller values of $h$ it's easy to see what's going on. As $h\\rightarrow 0^+,$ the function goes becomes a very tall spike, very close to $z=0,$ with area $1.$ When $h$ goes to zero, the spike that contains all the area gets pushed to the boundary at $x=0.$</p>\n\n<p>For our integral, oscillatory cancellations are ' pushed to infinity' in a certain sense as $h\\rightarrow 0,$ so taking the limit inside causes a divergence. 
To see how this works, we can rewrite $$\\sin((m+h)x)-\\sin(mx) = \\sin(mx)\\cos(hx)+\\cos(mx)\\sin(hx) -\\sin(mx)$$ so that the integral is $$ \\int_0^\\infty\\frac{\\sin(mx)(\\cos(hx)-1) + \\cos(mx)\\sin(hx)}{hx}dx.$$</p>\n\n<p>When you take the limiting form $\\sin(hx)\\approx hx$ and $\\cos(hx)-1)\\approx \\frac{1}{2}h^2x^2$as $h\\rightarrow 0,$ you completely lose the fact that the functions oscillate. Thus the integral, whose convergence was controlled by this oscillating behavior at large periods $x \\sim 1/h$ can have its convergence destroyed by taking the limit inside. You are letting $h\\rightarrow 0$ at fixed $x$ <em>without</em> appreciating that the function's oscillations when $x$ gets above $1/h$ were the only thing making it converge in the first place. (You're pushing the oscillation period off to infinity, so it no longer actually oscillates.)</p>\n\n<p>Thus when we take the limit $h\\rightarrow 0$ inside the integral and get $\\int_0^\\infty \\cos(x)dx,$ which no longer converges, we are no longer quite so surprised.</p>\n\n<p>Going back to the expression $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x},$$\nwe can see another angle. Usually, our intuition says the integrand goes to zero like $h$ for $h\\rightarrow 0,$ so we can pull the $h$ out of the integrand and be left with the derivative inside. However, as we know now, this only works when things are sufficiently uniform.</p>\n\n<p>What happens in our case, is that even though the difference in $\\sin((m+h)x)-\\sin(mx)$ goes to zero like $h$ when $h\\rightarrow0,$ the integral is actually always zero, for any $h$ (so long as $m+h$ stays positive). </p>\n\n<p>To see this, if you go back to when the integrals were separate and write $$ f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}\\left(\\int_0^\\infty \\sin((m+h)x)\\frac{dx}{x} - \\int_0^\\infty \\sin(mx)\\frac{dx}{x}\\right).$$ Since we know the integrals are both convergent, we can change variables. Because of the $dx/x$ they both go to $\\int_0^\\infty \\sin(u)du/u$ which means they're the same and we have $$f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}*0=0,$$ which is the correct answer.</p>\n\n<p>Intuitively, stretching out the sine waves to get more area is exactly cancelled out by fact that the amplitude decays like $1/x$ when it stretches out.</p>\n" }, { "answer_id": 2102741, "score": 1, "text": "<p>What you have just encountered is an instance when it is not possible to differentiate under the integral sign. </p>\n\n<p>There are many references where you can find conditions under which this is possible. Personally, I like the book <strong>Probability With a View Towards Statistics</strong> by Hoffman-Jorgensen, which contains a huge amount of detailed and useful measure theory results.</p>\n" }, { "answer_id": 2102755, "score": 5, "text": "<p>Say we have $$f(m) = \\int_0^\\infty \\sin(mx)\\frac{dx}{x}$$ and we want to compute the derivative at $m&gt;0.$ We need to take $$\\lim_{h\\rightarrow0}\\frac{f(m+h)-f(m)}{h}. 
$$ Since both integrals converge, we're free to combine them, and we get $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x}.$$</p>\n\n<p>In order to come to the conclusion that this equals $\\int_0^\\infty\\cos(x)dx,$ we need to be able to justify bringing the limit inside the integral.</p>\n\n<p>The problem of when you can bring a limit inside an integral is well-studied, and some well-known sufficient conditions for when you can are the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"nofollow noreferrer\">Dominated Convergence Theorem</a> and the <a href=\"https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem\" rel=\"nofollow noreferrer\">Monotone Convergence Theorem</a>. Clearly (since the answer would be wrong), this is one case where you can't. This frequently happens in oscillating integrals, but also in lots of other cases as well. In this case, if your integral is relying on cancellation from oscillations to stay finite, taking a limit inside can screw up things. </p>\n\n<p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\\rightarrow 0$. Sometimes, however, the global behavior of the integral is changed when you do this. This generally happens when, as $h\\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary', or 'pushed off to infinity'. </p>\n\n<p>A nice example, a bit different from ours, but where it's easy to see what's going on is $\\int_0^\\infty hx e^{-\\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by substitution. Thus $\\lim_{h\\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \\ne 1$. If you sketch the function for smaller and smaller values of $h$ it's easy to see what's going on. As $h\\rightarrow 0^+,$ the function goes becomes a very tall spike, very close to $z=0,$ with area $1.$ When $h$ goes to zero, the spike that contains all the area gets pushed to the boundary at $x=0.$</p>\n\n<p>For our integral, oscillatory cancellations are ' pushed to infinity' in a certain sense as $h\\rightarrow 0,$ so taking the limit inside causes a divergence. To see how this works, we can rewrite $$\\sin((m+h)x)-\\sin(mx) = \\sin(mx)\\cos(hx)+\\cos(mx)\\sin(hx) -\\sin(mx)$$ so that the integral is $$ \\int_0^\\infty\\frac{\\sin(mx)(\\cos(hx)-1) + \\cos(mx)\\sin(hx)}{hx}dx.$$</p>\n\n<p>When you take the limiting form $\\sin(hx)\\approx hx$ and $\\cos(hx)-1)\\approx \\frac{1}{2}h^2x^2$as $h\\rightarrow 0,$ you completely lose the fact that the functions oscillate. Thus the integral, whose convergence was controlled by this oscillating behavior at large periods $x \\sim 1/h$ can have its convergence destroyed by taking the limit inside. You are letting $h\\rightarrow 0$ at fixed $x$ <em>without</em> appreciating that the function's oscillations when $x$ gets above $1/h$ were the only thing making it converge in the first place. 
(You're pushing the oscillation period off to infinity, so it no longer actually oscillates.)</p>\n\n<p>Thus when we take the limit $h\\rightarrow 0$ inside the integral and get $\\int_0^\\infty \\cos(x)dx,$ which no longer converges, we are no longer quite so surprised.</p>\n\n<p>Going back to the expression $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x},$$\nwe can see another angle. Usually, our intuition says the integrand goes to zero like $h$ for $h\\rightarrow 0,$ so we can pull the $h$ out of the integrand and be left with the derivative inside. However, as we know now, this only works when things are sufficiently uniform.</p>\n\n<p>What happens in our case, is that even though the difference in $\\sin((m+h)x)-\\sin(mx)$ goes to zero like $h$ when $h\\rightarrow0,$ the integral is actually always zero, for any $h$ (so long as $m+h$ stays positive). </p>\n\n<p>To see this, if you go back to when the integrals were separate and write $$ f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}\\left(\\int_0^\\infty \\sin((m+h)x)\\frac{dx}{x} - \\int_0^\\infty \\sin(mx)\\frac{dx}{x}\\right).$$ Since we know the integrals are both convergent, we can change variables. Because of the $dx/x$ they both go to $\\int_0^\\infty \\sin(u)du/u$ which means they're the same and we have $$f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}*0=0,$$ which is the correct answer.</p>\n\n<p>Intuitively, stretching out the sine waves to get more area is exactly cancelled out by fact that the amplitude decays like $1/x$ when it stretches out.</p>\n" }, { "answer_id": 2102741, "score": 1, "text": "<p>What you have just encountered is an instance when it is not possible to differentiate under the integral sign. </p>\n\n<p>There are many references where you can find conditions under which this is possible. Personally, I like the book <strong>Probability With a View Towards Statistics</strong> by Hoffman-Jorgensen, which contains a huge amount of detailed and useful measure theory results.</p>\n" }, { "answer_id": 2102755, "score": 5, "text": "<p>Say we have $$f(m) = \\int_0^\\infty \\sin(mx)\\frac{dx}{x}$$ and we want to compute the derivative at $m&gt;0.$ We need to take $$\\lim_{h\\rightarrow0}\\frac{f(m+h)-f(m)}{h}. $$ Since both integrals converge, we're free to combine them, and we get $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x}.$$</p>\n\n<p>In order to come to the conclusion that this equals $\\int_0^\\infty\\cos(x)dx,$ we need to be able to justify bringing the limit inside the integral.</p>\n\n<p>The problem of when you can bring a limit inside an integral is well-studied, and some well-known sufficient conditions for when you can are the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"nofollow noreferrer\">Dominated Convergence Theorem</a> and the <a href=\"https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem\" rel=\"nofollow noreferrer\">Monotone Convergence Theorem</a>. Clearly (since the answer would be wrong), this is one case where you can't. This frequently happens in oscillating integrals, but also in lots of other cases as well. In this case, if your integral is relying on cancellation from oscillations to stay finite, taking a limit inside can screw up things. 
</p>\n\n<p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\\rightarrow 0$. Sometimes, however, the global behavior of the integral is changed when you do this. This generally happens when, as $h\\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary', or 'pushed off to infinity'. </p>\n\n<p>A nice example, a bit different from ours, but where it's easy to see what's going on is $\\int_0^\\infty hx e^{-\\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by substitution. Thus $\\lim_{h\\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \\ne 1$. If you sketch the function for smaller and smaller values of $h$ it's easy to see what's going on. As $h\\rightarrow 0^+,$ the function goes becomes a very tall spike, very close to $z=0,$ with area $1.$ When $h$ goes to zero, the spike that contains all the area gets pushed to the boundary at $x=0.$</p>\n\n<p>For our integral, oscillatory cancellations are ' pushed to infinity' in a certain sense as $h\\rightarrow 0,$ so taking the limit inside causes a divergence. To see how this works, we can rewrite $$\\sin((m+h)x)-\\sin(mx) = \\sin(mx)\\cos(hx)+\\cos(mx)\\sin(hx) -\\sin(mx)$$ so that the integral is $$ \\int_0^\\infty\\frac{\\sin(mx)(\\cos(hx)-1) + \\cos(mx)\\sin(hx)}{hx}dx.$$</p>\n\n<p>When you take the limiting form $\\sin(hx)\\approx hx$ and $\\cos(hx)-1)\\approx \\frac{1}{2}h^2x^2$as $h\\rightarrow 0,$ you completely lose the fact that the functions oscillate. Thus the integral, whose convergence was controlled by this oscillating behavior at large periods $x \\sim 1/h$ can have its convergence destroyed by taking the limit inside. You are letting $h\\rightarrow 0$ at fixed $x$ <em>without</em> appreciating that the function's oscillations when $x$ gets above $1/h$ were the only thing making it converge in the first place. (You're pushing the oscillation period off to infinity, so it no longer actually oscillates.)</p>\n\n<p>Thus when we take the limit $h\\rightarrow 0$ inside the integral and get $\\int_0^\\infty \\cos(x)dx,$ which no longer converges, we are no longer quite so surprised.</p>\n\n<p>Going back to the expression $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x},$$\nwe can see another angle. Usually, our intuition says the integrand goes to zero like $h$ for $h\\rightarrow 0,$ so we can pull the $h$ out of the integrand and be left with the derivative inside. However, as we know now, this only works when things are sufficiently uniform.</p>\n\n<p>What happens in our case, is that even though the difference in $\\sin((m+h)x)-\\sin(mx)$ goes to zero like $h$ when $h\\rightarrow0,$ the integral is actually always zero, for any $h$ (so long as $m+h$ stays positive). </p>\n\n<p>To see this, if you go back to when the integrals were separate and write $$ f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}\\left(\\int_0^\\infty \\sin((m+h)x)\\frac{dx}{x} - \\int_0^\\infty \\sin(mx)\\frac{dx}{x}\\right).$$ Since we know the integrals are both convergent, we can change variables. 
Because of the $dx/x$ they both go to $\\int_0^\\infty \\sin(u)du/u$ which means they're the same and we have $$f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}*0=0,$$ which is the correct answer.</p>\n\n<p>Intuitively, stretching out the sine waves to get more area is exactly cancelled out by fact that the amplitude decays like $1/x$ when it stretches out.</p>\n" }, { "answer_id": 2102741, "score": 1, "text": "<p>What you have just encountered is an instance when it is not possible to differentiate under the integral sign. </p>\n\n<p>There are many references where you can find conditions under which this is possible. Personally, I like the book <strong>Probability With a View Towards Statistics</strong> by Hoffman-Jorgensen, which contains a huge amount of detailed and useful measure theory results.</p>\n" }, { "answer_id": 2102755, "score": 5, "text": "<p>Say we have $$f(m) = \\int_0^\\infty \\sin(mx)\\frac{dx}{x}$$ and we want to compute the derivative at $m&gt;0.$ We need to take $$\\lim_{h\\rightarrow0}\\frac{f(m+h)-f(m)}{h}. $$ Since both integrals converge, we're free to combine them, and we get $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x}.$$</p>\n\n<p>In order to come to the conclusion that this equals $\\int_0^\\infty\\cos(x)dx,$ we need to be able to justify bringing the limit inside the integral.</p>\n\n<p>The problem of when you can bring a limit inside an integral is well-studied, and some well-known sufficient conditions for when you can are the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"nofollow noreferrer\">Dominated Convergence Theorem</a> and the <a href=\"https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem\" rel=\"nofollow noreferrer\">Monotone Convergence Theorem</a>. Clearly (since the answer would be wrong), this is one case where you can't. This frequently happens in oscillating integrals, but also in lots of other cases as well. In this case, if your integral is relying on cancellation from oscillations to stay finite, taking a limit inside can screw up things. </p>\n\n<p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\\rightarrow 0$. Sometimes, however, the global behavior of the integral is changed when you do this. This generally happens when, as $h\\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary', or 'pushed off to infinity'. </p>\n\n<p>A nice example, a bit different from ours, but where it's easy to see what's going on is $\\int_0^\\infty hx e^{-\\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by substitution. Thus $\\lim_{h\\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \\ne 1$. If you sketch the function for smaller and smaller values of $h$ it's easy to see what's going on. As $h\\rightarrow 0^+,$ the function goes becomes a very tall spike, very close to $z=0,$ with area $1.$ When $h$ goes to zero, the spike that contains all the area gets pushed to the boundary at $x=0.$</p>\n\n<p>For our integral, oscillatory cancellations are ' pushed to infinity' in a certain sense as $h\\rightarrow 0,$ so taking the limit inside causes a divergence. 
To see how this works, we can rewrite $$\\sin((m+h)x)-\\sin(mx) = \\sin(mx)\\cos(hx)+\\cos(mx)\\sin(hx) -\\sin(mx)$$ so that the integral is $$ \\int_0^\\infty\\frac{\\sin(mx)(\\cos(hx)-1) + \\cos(mx)\\sin(hx)}{hx}dx.$$</p>\n\n<p>When you take the limiting form $\\sin(hx)\\approx hx$ and $\\cos(hx)-1)\\approx \\frac{1}{2}h^2x^2$as $h\\rightarrow 0,$ you completely lose the fact that the functions oscillate. Thus the integral, whose convergence was controlled by this oscillating behavior at large periods $x \\sim 1/h$ can have its convergence destroyed by taking the limit inside. You are letting $h\\rightarrow 0$ at fixed $x$ <em>without</em> appreciating that the function's oscillations when $x$ gets above $1/h$ were the only thing making it converge in the first place. (You're pushing the oscillation period off to infinity, so it no longer actually oscillates.)</p>\n\n<p>Thus when we take the limit $h\\rightarrow 0$ inside the integral and get $\\int_0^\\infty \\cos(x)dx,$ which no longer converges, we are no longer quite so surprised.</p>\n\n<p>Going back to the expression $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x},$$\nwe can see another angle. Usually, our intuition says the integrand goes to zero like $h$ for $h\\rightarrow 0,$ so we can pull the $h$ out of the integrand and be left with the derivative inside. However, as we know now, this only works when things are sufficiently uniform.</p>\n\n<p>What happens in our case, is that even though the difference in $\\sin((m+h)x)-\\sin(mx)$ goes to zero like $h$ when $h\\rightarrow0,$ the integral is actually always zero, for any $h$ (so long as $m+h$ stays positive). </p>\n\n<p>To see this, if you go back to when the integrals were separate and write $$ f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}\\left(\\int_0^\\infty \\sin((m+h)x)\\frac{dx}{x} - \\int_0^\\infty \\sin(mx)\\frac{dx}{x}\\right).$$ Since we know the integrals are both convergent, we can change variables. Because of the $dx/x$ they both go to $\\int_0^\\infty \\sin(u)du/u$ which means they're the same and we have $$f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}*0=0,$$ which is the correct answer.</p>\n\n<p>Intuitively, stretching out the sine waves to get more area is exactly cancelled out by fact that the amplitude decays like $1/x$ when it stretches out.</p>\n" }, { "answer_id": 2102741, "score": 1, "text": "<p>What you have just encountered is an instance when it is not possible to differentiate under the integral sign. </p>\n\n<p>There are many references where you can find conditions under which this is possible. Personally, I like the book <strong>Probability With a View Towards Statistics</strong> by Hoffman-Jorgensen, which contains a huge amount of detailed and useful measure theory results.</p>\n" }, { "answer_id": 2102755, "score": 5, "text": "<p>Say we have $$f(m) = \\int_0^\\infty \\sin(mx)\\frac{dx}{x}$$ and we want to compute the derivative at $m&gt;0.$ We need to take $$\\lim_{h\\rightarrow0}\\frac{f(m+h)-f(m)}{h}. 
$$ Since both integrals converge, we're free to combine them, and we get $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x}.$$</p>\n\n<p>In order to come to the conclusion that this equals $\\int_0^\\infty\\cos(x)dx,$ we need to be able to justify bringing the limit inside the integral.</p>\n\n<p>The problem of when you can bring a limit inside an integral is well-studied, and some well-known sufficient conditions for when you can are the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"nofollow noreferrer\">Dominated Convergence Theorem</a> and the <a href=\"https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem\" rel=\"nofollow noreferrer\">Monotone Convergence Theorem</a>. Clearly (since the answer would be wrong), this is one case where you can't. This frequently happens in oscillating integrals, but also in lots of other cases as well. In this case, if your integral is relying on cancellation from oscillations to stay finite, taking a limit inside can screw up things. </p>\n\n<p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\\rightarrow 0$. Sometimes, however, the global behavior of the integral is changed when you do this. This generally happens when, as $h\\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary', or 'pushed off to infinity'. </p>\n\n<p>A nice example, a bit different from ours, but where it's easy to see what's going on is $\\int_0^\\infty hx e^{-\\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by substitution. Thus $\\lim_{h\\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \\ne 1$. If you sketch the function for smaller and smaller values of $h$ it's easy to see what's going on. As $h\\rightarrow 0^+,$ the function goes becomes a very tall spike, very close to $z=0,$ with area $1.$ When $h$ goes to zero, the spike that contains all the area gets pushed to the boundary at $x=0.$</p>\n\n<p>For our integral, oscillatory cancellations are ' pushed to infinity' in a certain sense as $h\\rightarrow 0,$ so taking the limit inside causes a divergence. To see how this works, we can rewrite $$\\sin((m+h)x)-\\sin(mx) = \\sin(mx)\\cos(hx)+\\cos(mx)\\sin(hx) -\\sin(mx)$$ so that the integral is $$ \\int_0^\\infty\\frac{\\sin(mx)(\\cos(hx)-1) + \\cos(mx)\\sin(hx)}{hx}dx.$$</p>\n\n<p>When you take the limiting form $\\sin(hx)\\approx hx$ and $\\cos(hx)-1)\\approx \\frac{1}{2}h^2x^2$as $h\\rightarrow 0,$ you completely lose the fact that the functions oscillate. Thus the integral, whose convergence was controlled by this oscillating behavior at large periods $x \\sim 1/h$ can have its convergence destroyed by taking the limit inside. You are letting $h\\rightarrow 0$ at fixed $x$ <em>without</em> appreciating that the function's oscillations when $x$ gets above $1/h$ were the only thing making it converge in the first place. 
(You're pushing the oscillation period off to infinity, so it no longer actually oscillates.)</p>\n\n<p>Thus when we take the limit $h\\rightarrow 0$ inside the integral and get $\\int_0^\\infty \\cos(x)dx,$ which no longer converges, we are no longer quite so surprised.</p>\n\n<p>Going back to the expression $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x},$$\nwe can see another angle. Usually, our intuition says the integrand goes to zero like $h$ for $h\\rightarrow 0,$ so we can pull the $h$ out of the integrand and be left with the derivative inside. However, as we know now, this only works when things are sufficiently uniform.</p>\n\n<p>What happens in our case, is that even though the difference in $\\sin((m+h)x)-\\sin(mx)$ goes to zero like $h$ when $h\\rightarrow0,$ the integral is actually always zero, for any $h$ (so long as $m+h$ stays positive). </p>\n\n<p>To see this, if you go back to when the integrals were separate and write $$ f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}\\left(\\int_0^\\infty \\sin((m+h)x)\\frac{dx}{x} - \\int_0^\\infty \\sin(mx)\\frac{dx}{x}\\right).$$ Since we know the integrals are both convergent, we can change variables. Because of the $dx/x$ they both go to $\\int_0^\\infty \\sin(u)du/u$ which means they're the same and we have $$f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}*0=0,$$ which is the correct answer.</p>\n\n<p>Intuitively, stretching out the sine waves to get more area is exactly cancelled out by fact that the amplitude decays like $1/x$ when it stretches out.</p>\n" }, { "answer_id": 2102741, "score": 1, "text": "<p>What you have just encountered is an instance when it is not possible to differentiate under the integral sign. </p>\n\n<p>There are many references where you can find conditions under which this is possible. Personally, I like the book <strong>Probability With a View Towards Statistics</strong> by Hoffman-Jorgensen, which contains a huge amount of detailed and useful measure theory results.</p>\n" }, { "answer_id": 2102755, "score": 5, "text": "<p>Say we have $$f(m) = \\int_0^\\infty \\sin(mx)\\frac{dx}{x}$$ and we want to compute the derivative at $m&gt;0.$ We need to take $$\\lim_{h\\rightarrow0}\\frac{f(m+h)-f(m)}{h}. $$ Since both integrals converge, we're free to combine them, and we get $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x}.$$</p>\n\n<p>In order to come to the conclusion that this equals $\\int_0^\\infty\\cos(x)dx,$ we need to be able to justify bringing the limit inside the integral.</p>\n\n<p>The problem of when you can bring a limit inside an integral is well-studied, and some well-known sufficient conditions for when you can are the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"nofollow noreferrer\">Dominated Convergence Theorem</a> and the <a href=\"https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem\" rel=\"nofollow noreferrer\">Monotone Convergence Theorem</a>. Clearly (since the answer would be wrong), this is one case where you can't. This frequently happens in oscillating integrals, but also in lots of other cases as well. In this case, if your integral is relying on cancellation from oscillations to stay finite, taking a limit inside can screw up things. 
</p>\n\n<p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\\rightarrow 0$. Sometimes, however, the global behavior of the integral is changed when you do this. This generally happens when, as $h\\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary', or 'pushed off to infinity'. </p>\n\n<p>A nice example, a bit different from ours, but where it's easy to see what's going on is $\\int_0^\\infty hx e^{-\\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by substitution. Thus $\\lim_{h\\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \\ne 1$. If you sketch the function for smaller and smaller values of $h$ it's easy to see what's going on. As $h\\rightarrow 0^+,$ the function goes becomes a very tall spike, very close to $z=0,$ with area $1.$ When $h$ goes to zero, the spike that contains all the area gets pushed to the boundary at $x=0.$</p>\n\n<p>For our integral, oscillatory cancellations are ' pushed to infinity' in a certain sense as $h\\rightarrow 0,$ so taking the limit inside causes a divergence. To see how this works, we can rewrite $$\\sin((m+h)x)-\\sin(mx) = \\sin(mx)\\cos(hx)+\\cos(mx)\\sin(hx) -\\sin(mx)$$ so that the integral is $$ \\int_0^\\infty\\frac{\\sin(mx)(\\cos(hx)-1) + \\cos(mx)\\sin(hx)}{hx}dx.$$</p>\n\n<p>When you take the limiting form $\\sin(hx)\\approx hx$ and $\\cos(hx)-1)\\approx \\frac{1}{2}h^2x^2$as $h\\rightarrow 0,$ you completely lose the fact that the functions oscillate. Thus the integral, whose convergence was controlled by this oscillating behavior at large periods $x \\sim 1/h$ can have its convergence destroyed by taking the limit inside. You are letting $h\\rightarrow 0$ at fixed $x$ <em>without</em> appreciating that the function's oscillations when $x$ gets above $1/h$ were the only thing making it converge in the first place. (You're pushing the oscillation period off to infinity, so it no longer actually oscillates.)</p>\n\n<p>Thus when we take the limit $h\\rightarrow 0$ inside the integral and get $\\int_0^\\infty \\cos(x)dx,$ which no longer converges, we are no longer quite so surprised.</p>\n\n<p>Going back to the expression $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x},$$\nwe can see another angle. Usually, our intuition says the integrand goes to zero like $h$ for $h\\rightarrow 0,$ so we can pull the $h$ out of the integrand and be left with the derivative inside. However, as we know now, this only works when things are sufficiently uniform.</p>\n\n<p>What happens in our case, is that even though the difference in $\\sin((m+h)x)-\\sin(mx)$ goes to zero like $h$ when $h\\rightarrow0,$ the integral is actually always zero, for any $h$ (so long as $m+h$ stays positive). </p>\n\n<p>To see this, if you go back to when the integrals were separate and write $$ f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}\\left(\\int_0^\\infty \\sin((m+h)x)\\frac{dx}{x} - \\int_0^\\infty \\sin(mx)\\frac{dx}{x}\\right).$$ Since we know the integrals are both convergent, we can change variables. 
Because of the $dx/x$ they both go to $\\int_0^\\infty \\sin(u)du/u$ which means they're the same and we have $$f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}*0=0,$$ which is the correct answer.</p>\n\n<p>Intuitively, stretching out the sine waves to get more area is exactly cancelled out by fact that the amplitude decays like $1/x$ when it stretches out.</p>\n" }, { "answer_id": 2102741, "score": 1, "text": "<p>What you have just encountered is an instance when it is not possible to differentiate under the integral sign. </p>\n\n<p>There are many references where you can find conditions under which this is possible. Personally, I like the book <strong>Probability With a View Towards Statistics</strong> by Hoffman-Jorgensen, which contains a huge amount of detailed and useful measure theory results.</p>\n" }, { "answer_id": 2102755, "score": 5, "text": "<p>Say we have $$f(m) = \\int_0^\\infty \\sin(mx)\\frac{dx}{x}$$ and we want to compute the derivative at $m&gt;0.$ We need to take $$\\lim_{h\\rightarrow0}\\frac{f(m+h)-f(m)}{h}. $$ Since both integrals converge, we're free to combine them, and we get $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x}.$$</p>\n\n<p>In order to come to the conclusion that this equals $\\int_0^\\infty\\cos(x)dx,$ we need to be able to justify bringing the limit inside the integral.</p>\n\n<p>The problem of when you can bring a limit inside an integral is well-studied, and some well-known sufficient conditions for when you can are the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"nofollow noreferrer\">Dominated Convergence Theorem</a> and the <a href=\"https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem\" rel=\"nofollow noreferrer\">Monotone Convergence Theorem</a>. Clearly (since the answer would be wrong), this is one case where you can't. This frequently happens in oscillating integrals, but also in lots of other cases as well. In this case, if your integral is relying on cancellation from oscillations to stay finite, taking a limit inside can screw up things. </p>\n\n<p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\\rightarrow 0$. Sometimes, however, the global behavior of the integral is changed when you do this. This generally happens when, as $h\\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary', or 'pushed off to infinity'. </p>\n\n<p>A nice example, a bit different from ours, but where it's easy to see what's going on is $\\int_0^\\infty hx e^{-\\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by substitution. Thus $\\lim_{h\\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \\ne 1$. If you sketch the function for smaller and smaller values of $h$ it's easy to see what's going on. As $h\\rightarrow 0^+,$ the function goes becomes a very tall spike, very close to $z=0,$ with area $1.$ When $h$ goes to zero, the spike that contains all the area gets pushed to the boundary at $x=0.$</p>\n\n<p>For our integral, oscillatory cancellations are ' pushed to infinity' in a certain sense as $h\\rightarrow 0,$ so taking the limit inside causes a divergence. 
To see how this works, we can rewrite $$\\sin((m+h)x)-\\sin(mx) = \\sin(mx)\\cos(hx)+\\cos(mx)\\sin(hx) -\\sin(mx)$$ so that the integral is $$ \\int_0^\\infty\\frac{\\sin(mx)(\\cos(hx)-1) + \\cos(mx)\\sin(hx)}{hx}dx.$$</p>\n\n<p>When you take the limiting form $\\sin(hx)\\approx hx$ and $\\cos(hx)-1)\\approx \\frac{1}{2}h^2x^2$as $h\\rightarrow 0,$ you completely lose the fact that the functions oscillate. Thus the integral, whose convergence was controlled by this oscillating behavior at large periods $x \\sim 1/h$ can have its convergence destroyed by taking the limit inside. You are letting $h\\rightarrow 0$ at fixed $x$ <em>without</em> appreciating that the function's oscillations when $x$ gets above $1/h$ were the only thing making it converge in the first place. (You're pushing the oscillation period off to infinity, so it no longer actually oscillates.)</p>\n\n<p>Thus when we take the limit $h\\rightarrow 0$ inside the integral and get $\\int_0^\\infty \\cos(x)dx,$ which no longer converges, we are no longer quite so surprised.</p>\n\n<p>Going back to the expression $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x},$$\nwe can see another angle. Usually, our intuition says the integrand goes to zero like $h$ for $h\\rightarrow 0,$ so we can pull the $h$ out of the integrand and be left with the derivative inside. However, as we know now, this only works when things are sufficiently uniform.</p>\n\n<p>What happens in our case, is that even though the difference in $\\sin((m+h)x)-\\sin(mx)$ goes to zero like $h$ when $h\\rightarrow0,$ the integral is actually always zero, for any $h$ (so long as $m+h$ stays positive). </p>\n\n<p>To see this, if you go back to when the integrals were separate and write $$ f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}\\left(\\int_0^\\infty \\sin((m+h)x)\\frac{dx}{x} - \\int_0^\\infty \\sin(mx)\\frac{dx}{x}\\right).$$ Since we know the integrals are both convergent, we can change variables. Because of the $dx/x$ they both go to $\\int_0^\\infty \\sin(u)du/u$ which means they're the same and we have $$f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}*0=0,$$ which is the correct answer.</p>\n\n<p>Intuitively, stretching out the sine waves to get more area is exactly cancelled out by fact that the amplitude decays like $1/x$ when it stretches out.</p>\n" }, { "answer_id": 2102741, "score": 1, "text": "<p>What you have just encountered is an instance when it is not possible to differentiate under the integral sign. </p>\n\n<p>There are many references where you can find conditions under which this is possible. Personally, I like the book <strong>Probability With a View Towards Statistics</strong> by Hoffman-Jorgensen, which contains a huge amount of detailed and useful measure theory results.</p>\n" }, { "answer_id": 2102755, "score": 5, "text": "<p>Say we have $$f(m) = \\int_0^\\infty \\sin(mx)\\frac{dx}{x}$$ and we want to compute the derivative at $m&gt;0.$ We need to take $$\\lim_{h\\rightarrow0}\\frac{f(m+h)-f(m)}{h}. 
$$ Since both integrals converge, we're free to combine them, and we get $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x}.$$</p>\n\n<p>In order to come to the conclusion that this equals $\\int_0^\\infty\\cos(x)dx,$ we need to be able to justify bringing the limit inside the integral.</p>\n\n<p>The problem of when you can bring a limit inside an integral is well-studied, and some well-known sufficient conditions for when you can are the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"nofollow noreferrer\">Dominated Convergence Theorem</a> and the <a href=\"https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem\" rel=\"nofollow noreferrer\">Monotone Convergence Theorem</a>. Clearly (since the answer would be wrong), this is one case where you can't. This frequently happens in oscillating integrals, but also in lots of other cases as well. In this case, if your integral is relying on cancellation from oscillations to stay finite, taking a limit inside can screw up things. </p>\n\n<p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\\rightarrow 0$. Sometimes, however, the global behavior of the integral is changed when you do this. This generally happens when, as $h\\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary', or 'pushed off to infinity'. </p>\n\n<p>A nice example, a bit different from ours, but where it's easy to see what's going on is $\\int_0^\\infty hx e^{-\\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by substitution. Thus $\\lim_{h\\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \\ne 1$. If you sketch the function for smaller and smaller values of $h$ it's easy to see what's going on. As $h\\rightarrow 0^+,$ the function goes becomes a very tall spike, very close to $z=0,$ with area $1.$ When $h$ goes to zero, the spike that contains all the area gets pushed to the boundary at $x=0.$</p>\n\n<p>For our integral, oscillatory cancellations are ' pushed to infinity' in a certain sense as $h\\rightarrow 0,$ so taking the limit inside causes a divergence. To see how this works, we can rewrite $$\\sin((m+h)x)-\\sin(mx) = \\sin(mx)\\cos(hx)+\\cos(mx)\\sin(hx) -\\sin(mx)$$ so that the integral is $$ \\int_0^\\infty\\frac{\\sin(mx)(\\cos(hx)-1) + \\cos(mx)\\sin(hx)}{hx}dx.$$</p>\n\n<p>When you take the limiting form $\\sin(hx)\\approx hx$ and $\\cos(hx)-1)\\approx \\frac{1}{2}h^2x^2$as $h\\rightarrow 0,$ you completely lose the fact that the functions oscillate. Thus the integral, whose convergence was controlled by this oscillating behavior at large periods $x \\sim 1/h$ can have its convergence destroyed by taking the limit inside. You are letting $h\\rightarrow 0$ at fixed $x$ <em>without</em> appreciating that the function's oscillations when $x$ gets above $1/h$ were the only thing making it converge in the first place. 
(You're pushing the oscillation period off to infinity, so it no longer actually oscillates.)</p>\n\n<p>Thus when we take the limit $h\\rightarrow 0$ inside the integral and get $\\int_0^\\infty \\cos(x)dx,$ which no longer converges, we are no longer quite so surprised.</p>\n\n<p>Going back to the expression $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x},$$\nwe can see another angle. Usually, our intuition says the integrand goes to zero like $h$ for $h\\rightarrow 0,$ so we can pull the $h$ out of the integrand and be left with the derivative inside. However, as we know now, this only works when things are sufficiently uniform.</p>\n\n<p>What happens in our case, is that even though the difference in $\\sin((m+h)x)-\\sin(mx)$ goes to zero like $h$ when $h\\rightarrow0,$ the integral is actually always zero, for any $h$ (so long as $m+h$ stays positive). </p>\n\n<p>To see this, if you go back to when the integrals were separate and write $$ f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}\\left(\\int_0^\\infty \\sin((m+h)x)\\frac{dx}{x} - \\int_0^\\infty \\sin(mx)\\frac{dx}{x}\\right).$$ Since we know the integrals are both convergent, we can change variables. Because of the $dx/x$ they both go to $\\int_0^\\infty \\sin(u)du/u$ which means they're the same and we have $$f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}*0=0,$$ which is the correct answer.</p>\n\n<p>Intuitively, stretching out the sine waves to get more area is exactly cancelled out by fact that the amplitude decays like $1/x$ when it stretches out.</p>\n" }, { "answer_id": 2102741, "score": 1, "text": "<p>What you have just encountered is an instance when it is not possible to differentiate under the integral sign. </p>\n\n<p>There are many references where you can find conditions under which this is possible. Personally, I like the book <strong>Probability With a View Towards Statistics</strong> by Hoffman-Jorgensen, which contains a huge amount of detailed and useful measure theory results.</p>\n" }, { "answer_id": 2102755, "score": 5, "text": "<p>Say we have $$f(m) = \\int_0^\\infty \\sin(mx)\\frac{dx}{x}$$ and we want to compute the derivative at $m&gt;0.$ We need to take $$\\lim_{h\\rightarrow0}\\frac{f(m+h)-f(m)}{h}. $$ Since both integrals converge, we're free to combine them, and we get $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x}.$$</p>\n\n<p>In order to come to the conclusion that this equals $\\int_0^\\infty\\cos(x)dx,$ we need to be able to justify bringing the limit inside the integral.</p>\n\n<p>The problem of when you can bring a limit inside an integral is well-studied, and some well-known sufficient conditions for when you can are the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"nofollow noreferrer\">Dominated Convergence Theorem</a> and the <a href=\"https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem\" rel=\"nofollow noreferrer\">Monotone Convergence Theorem</a>. Clearly (since the answer would be wrong), this is one case where you can't. This frequently happens in oscillating integrals, but also in lots of other cases as well. In this case, if your integral is relying on cancellation from oscillations to stay finite, taking a limit inside can screw up things. 
</p>\n\n<p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\\rightarrow 0$. Sometimes, however, the global behavior of the integral is changed when you do this. This generally happens when, as $h\\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary', or 'pushed off to infinity'. </p>\n\n<p>A nice example, a bit different from ours, but where it's easy to see what's going on is $\\int_0^\\infty hx e^{-\\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by substitution. Thus $\\lim_{h\\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \\ne 1$. If you sketch the function for smaller and smaller values of $h$ it's easy to see what's going on. As $h\\rightarrow 0^+,$ the function goes becomes a very tall spike, very close to $z=0,$ with area $1.$ When $h$ goes to zero, the spike that contains all the area gets pushed to the boundary at $x=0.$</p>\n\n<p>For our integral, oscillatory cancellations are ' pushed to infinity' in a certain sense as $h\\rightarrow 0,$ so taking the limit inside causes a divergence. To see how this works, we can rewrite $$\\sin((m+h)x)-\\sin(mx) = \\sin(mx)\\cos(hx)+\\cos(mx)\\sin(hx) -\\sin(mx)$$ so that the integral is $$ \\int_0^\\infty\\frac{\\sin(mx)(\\cos(hx)-1) + \\cos(mx)\\sin(hx)}{hx}dx.$$</p>\n\n<p>When you take the limiting form $\\sin(hx)\\approx hx$ and $\\cos(hx)-1)\\approx \\frac{1}{2}h^2x^2$as $h\\rightarrow 0,$ you completely lose the fact that the functions oscillate. Thus the integral, whose convergence was controlled by this oscillating behavior at large periods $x \\sim 1/h$ can have its convergence destroyed by taking the limit inside. You are letting $h\\rightarrow 0$ at fixed $x$ <em>without</em> appreciating that the function's oscillations when $x$ gets above $1/h$ were the only thing making it converge in the first place. (You're pushing the oscillation period off to infinity, so it no longer actually oscillates.)</p>\n\n<p>Thus when we take the limit $h\\rightarrow 0$ inside the integral and get $\\int_0^\\infty \\cos(x)dx,$ which no longer converges, we are no longer quite so surprised.</p>\n\n<p>Going back to the expression $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x},$$\nwe can see another angle. Usually, our intuition says the integrand goes to zero like $h$ for $h\\rightarrow 0,$ so we can pull the $h$ out of the integrand and be left with the derivative inside. However, as we know now, this only works when things are sufficiently uniform.</p>\n\n<p>What happens in our case, is that even though the difference in $\\sin((m+h)x)-\\sin(mx)$ goes to zero like $h$ when $h\\rightarrow0,$ the integral is actually always zero, for any $h$ (so long as $m+h$ stays positive). </p>\n\n<p>To see this, if you go back to when the integrals were separate and write $$ f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}\\left(\\int_0^\\infty \\sin((m+h)x)\\frac{dx}{x} - \\int_0^\\infty \\sin(mx)\\frac{dx}{x}\\right).$$ Since we know the integrals are both convergent, we can change variables. 
Because of the $dx/x$ they both go to $\\int_0^\\infty \\sin(u)du/u$ which means they're the same and we have $$f'(m) = \\lim_{h\\rightarrow 0}\\frac{1}{h}*0=0,$$ which is the correct answer.</p>\n\n<p>Intuitively, stretching out the sine waves to get more area is exactly cancelled out by fact that the amplitude decays like $1/x$ when it stretches out.</p>\n" }, { "answer_id": 2102741, "score": 1, "text": "<p>What you have just encountered is an instance when it is not possible to differentiate under the integral sign. </p>\n\n<p>There are many references where you can find conditions under which this is possible. Personally, I like the book <strong>Probability With a View Towards Statistics</strong> by Hoffman-Jorgensen, which contains a huge amount of detailed and useful measure theory results.</p>\n" }, { "answer_id": 2102755, "score": 5, "text": "<p>Say we have $$f(m) = \\int_0^\\infty \\sin(mx)\\frac{dx}{x}$$ and we want to compute the derivative at $m&gt;0.$ We need to take $$\\lim_{h\\rightarrow0}\\frac{f(m+h)-f(m)}{h}. $$ Since both integrals converge, we're free to combine them, and we get $$ f'(m) = \\lim_{h\\rightarrow0} \\frac{1}{h}\\int_0^\\infty \\left(\\sin((m+h)x)-\\sin(mx)\\right)\\frac{dx}{x}.$$</p>\n\n<p>In order to come to the conclusion that this equals $\\int_0^\\infty\\cos(x)dx,$ we need to be able to justify bringing the limit inside the integral.</p>\n\n<p>The problem of when you can bring a limit inside an integral is well-studied, and some well-known sufficient conditions for when you can are the <a href=\"https://en.wikipedia.org/wiki/Dominated_convergence_theorem\" rel=\"nofollow noreferrer\">Dominated Convergence Theorem</a> and the <a href=\"https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem\" rel=\"nofollow noreferrer\">Monotone Convergence Theorem</a>. Clearly (since the answer would be wrong), this is one case where you can't. This frequently happens in oscillating integrals, but also in lots of other cases as well. In this case, if your integral is relying on cancellation from oscillations to stay finite, taking a limit inside can screw up things. </p>\n\n<p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\\rightarrow 0$. Sometimes, however, the global behavior of the integral is changed when you do this. This generally happens when, as $h\\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary', or 'pushed off to infinity'. </p>\n\n<p>A nice example, a bit different from ours, but where it's easy to see what's going on is $\\int_0^\\infty hx e^{-\\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by substitution. Thus $\\lim_{h\\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \\ne 1$. If you sketch the function for smaller and smaller values of $h$ it's easy to see what's going on. As $h\\rightarrow 0^+,$ the function goes becomes a very tall spike, very close to $z=0,$ with area $1.$ When $h$ goes to zero, the spike that contains all the area gets pushed to the boundary at $x=0.$</p>\n\n<p>For our integral, oscillatory cancellations are ' pushed to infinity' in a certain sense as $h\\rightarrow 0,$ so taking the limit inside causes a divergence. 
Question: How does this converging function appear to diverge? Answer 1: <p>Say we have $$f(m) = \int_0^\infty \sin(mx)\frac{dx}{x}$$ and we want to compute the derivative at $m&gt;0.$ We need to take $$\lim_{h\rightarrow0}\frac{f(m+h)-f(m)}{h}.$$ Since both integrals converge, we're free to combine them, and we get $$ f'(m) = \lim_{h\rightarrow0} \frac{1}{h}\int_0^\infty \left(\sin((m+h)x)-\sin(mx)\right)\frac{dx}{x}.$$</p> <p>In order to conclude that this equals $\int_0^\infty\cos(mx)\,dx,$ we need to be able to justify bringing the limit inside the integral.</p> <p>The problem of when you can bring a limit inside an integral is well studied, and some well-known sufficient conditions are the <a href="https://en.wikipedia.org/wiki/Dominated_convergence_theorem" rel="nofollow noreferrer">Dominated Convergence Theorem</a> and the <a href="https://en.wikipedia.org/wiki/Monotone_convergence_theorem#Lebesgue.27s_monotone_convergence_theorem" rel="nofollow noreferrer">Monotone Convergence Theorem</a>. Clearly (since the answer would be wrong), this is one case where you can't. This happens frequently with oscillatory integrals, but in plenty of other cases as well. If your integral relies on cancellation from oscillations to stay finite, taking a limit inside can destroy that cancellation. </p> <p>When you take the limit inside the integral, you are assuming you can treat each point $x$ inside the integral uniformly as $h\rightarrow 0$. Sometimes, however, the global behavior of the integral changes when you do this. This generally happens when, as $h\rightarrow 0,$ a range of $x$ that contributes significantly to the integral gets 'squeezed onto the boundary' or 'pushed off to infinity'. </p> <p>A nice example, a bit different from ours but where it's easy to see what's going on, is $\int_0^\infty hx e^{-\frac{1}{2}hx^2}dx.$ For any $h&gt;0,$ the integral is $1$, which you can verify by the substitution $u=\frac{1}{2}hx^2$. Thus $\lim_{h\rightarrow 0^+}$ of the integral is $1.$ However, taking the limit inside the integral gives $0$ for all $x&gt;0,$ so the integral of the limit is $0 \ne 1$. If you sketch the function for smaller and smaller values of $h,$ it's easy to see what's going on. As $h\rightarrow 0^+,$ the function becomes a low, wide bump centered near $x=1/\sqrt{h},$ still with area $1.$ When $h$ goes to zero, the bump that contains all the area gets pushed off to infinity.</p>
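<p>A quick numerical sketch of this example (it assumes SciPy is installed, and is only an illustration): the area stays $1$ for every positive $h,$ while the integrand dies pointwise at every fixed $x$ and the bump carrying the area drifts off to infinity.</p>
<pre><code>import numpy as np
from scipy.integrate import quad

# Integral of h*x*exp(-h*x^2/2) over [0, inf): exactly 1 for every positive h.
for h in [1.0, 0.1, 0.01, 0.001]:
    area, _ = quad(lambda x: h * x * np.exp(-0.5 * h * x**2), 0, np.inf)
    bump = 1 / np.sqrt(h)  # the integrand peaks at x = 1/sqrt(h)
    print("h =", h, " area =", round(area, 6), " peak near x =", round(bump, 1))
</code></pre>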
<p>For our integral, oscillatory cancellations are 'pushed off to infinity' in a certain sense as $h\rightarrow 0,$ so taking the limit inside causes a divergence. To see how this works, we can rewrite $$\sin((m+h)x)-\sin(mx) = \sin(mx)\cos(hx)+\cos(mx)\sin(hx) -\sin(mx)$$ so that the integral is $$ \int_0^\infty\frac{\sin(mx)(\cos(hx)-1) + \cos(mx)\sin(hx)}{hx}dx.$$</p> <p>When you take the limiting forms $\sin(hx)\approx hx$ and $\cos(hx)-1\approx -\frac{1}{2}h^2x^2$ as $h\rightarrow 0,$ you completely lose the fact that the functions oscillate. Thus the integral, whose convergence was controlled by this oscillating behavior on scales $x \sim 1/h,$ can have its convergence destroyed by taking the limit inside. You are letting $h\rightarrow 0$ at fixed $x$ <em>without</em> appreciating that the function's oscillations once $x$ gets above $1/h$ were the only thing making it converge in the first place. (You're pushing the oscillation period off to infinity, so it no longer actually oscillates.)</p> <p>Thus when we take the limit $h\rightarrow 0$ inside the integral and get $\int_0^\infty \cos(mx)\,dx,$ which no longer converges, we are no longer quite so surprised.</p> <p>Going back to the expression $$ f'(m) = \lim_{h\rightarrow0} \frac{1}{h}\int_0^\infty \left(\sin((m+h)x)-\sin(mx)\right)\frac{dx}{x},$$ we can see another angle. Usually, our intuition says the integrand goes to zero like $h$ as $h\rightarrow 0,$ so we can pull the $h$ out of the integrand and be left with the derivative inside. However, as we know now, this only works when things are sufficiently uniform.</p> <p>What happens in our case is that even though the difference $\sin((m+h)x)-\sin(mx)$ goes to zero like $h$ as $h\rightarrow0,$ the integral is actually always zero, for any $h$ (so long as $m+h$ stays positive). </p> <p>To see this, go back to when the integrals were separate and write $$ f'(m) = \lim_{h\rightarrow 0}\frac{1}{h}\left(\int_0^\infty \sin((m+h)x)\frac{dx}{x} - \int_0^\infty \sin(mx)\frac{dx}{x}\right).$$ Since we know both integrals converge, we can change variables ($u=(m+h)x$ in the first, $u=mx$ in the second). Because of the $dx/x,$ they both become $\int_0^\infty \sin(u)\,du/u,$ which means they're equal, and we have $$f'(m) = \lim_{h\rightarrow 0}\frac{1}{h}\cdot 0=0,$$ which is the correct answer.</p> <p>Intuitively, stretching out the sine waves to get more area is exactly cancelled out by the fact that the amplitude decays like $1/x$ when it stretches out.</p>
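<p>To see the conclusion numerically, here is a minimal sketch using mpmath's <code>quadosc</code>, which is designed for oscillatory integrals over infinite intervals (assuming mpmath is installed; an illustration, not a proof): $f(m)$ comes out as $\pi/2$ for every $m$ tried, so the difference quotient is identically $0.$</p>
<pre><code>from mpmath import mp, quadosc, sin, pi, inf

mp.dps = 15
# f(m) = integral of sin(m*x)/x over [0, inf) equals pi/2 for every positive m.
for m in [1, 2, 5, 10]:
    f_m = quadosc(lambda x, m=m: sin(m * x) / x, [0, inf], period=2 * pi / m)
    print("m =", m, " f(m) =", f_m)  # each is 1.5707963... = pi/2
</code></pre>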
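<p>A symbolic cross-check along the same lines, assuming a reasonably recent SymPy (which can evaluate this classical Dirichlet integral in closed form): the result contains no $m$ at all, which is exactly why the derivative in $m$ is $0.$</p>
<pre><code>from sympy import symbols, sin, integrate, oo

x, m = symbols('x m', positive=True)
# The Dirichlet integral: SymPy returns pi/2, independent of m.
print(integrate(sin(m * x) / x, (x, 0, oo)))
</code></pre>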
Answer 18: <p>What you have just encountered is an instance where it is not possible to differentiate under the integral sign.</p> <p>There are many references where you can find conditions under which this is possible. Personally, I like the book <strong>Probability With a View Towards Statistics</strong> by Hoffman-Jorgensen, which contains a huge amount of detailed and useful measure theory results.</p>
Prove every self-complementary graph with $4n+1$ vertices has at least one vertex of degree $2n$
2,103,706
7
https://math.stackexchange.com/questions/2103706/prove-every-self-complementary-graph-with-4n1-vertices-has-at-least-one-verte
<p>I tried to prove this by induction.</p> <p>Base case $n=1$: $5$ vertices. I just drew a pentagon, which has $5$ vertices of degree $2$.</p> <p>Then I assume that for $n=k$, i.e. $4k+1$ vertices, there is at least one vertex with degree $2k$. The number of edges of this graph is $\dfrac{(4k+1)(4k)}{4}=(4k+1)k$.</p> <p>Then for $n=k+1$ there are $4k+5$ vertices. The number of edges is $\dfrac{(4k+5)(4k+4)}{4}=(4k+5)(k+1)$.</p> <p>The graph with $4k+5$ vertices has $8k+5$ more edges than the graph with $4k+1$ vertices.</p> <p>This is where I get stuck. Am I on the right track? How should I proceed from here?</p>
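<p>For reference, the edge-count difference expands as $$(4k+5)(k+1)-(4k+1)k=(4k^2+9k+5)-(4k^2+k)=8k+5.$$</p>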
[ { "answer_id": 2103737, "score": 4, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Complement_graph\" rel=\"nofollow noreferrer\">complementary graph</a> to $G$ adds all edges that are missing from the complete graph $K_{4n+1}$, and removes all existing edges. In $K_{4n+1}$ every vertex has $4n$ edges, so the complementing process changes each vertex degree from $x$ to $4n-x$.</p>\n\n<p>The self-complementary graph must have the same <a href=\"https://en.wikipedia.org/wiki/Degree_(graph_theory)#Degree_sequence\" rel=\"nofollow noreferrer\">degree sequence</a> as its complement, and each vertex has its degree change from $\\text{deg}(v)$ to $4n{-}\\text{deg}(v)$ in the switch to the complement - so there must also be a vertex of that degree in the original graph to switch back. This corresponds to switching between an early position in the degree sequence and a late one.</p>\n\n<p>So at the middle position $2n{+}1$ in the degree sequence, we must have a vertex that switches value with itself; that is, $\\text{deg}(v) = 4n{-}\\text{deg}(v)$. Which means for at least that vertex, $\\text{deg}(v) = 2n$.</p>\n\n<hr>\n\n<p>As an aside, you might also like to consider the other $5$-vertex self-complementary graph (apart from the $5$-cycle) in any proof you attempt: (from <a href=\"http://mathworld.wolfram.com/Self-ComplementaryGraph.html\" rel=\"nofollow noreferrer\">http://mathworld.wolfram.com/Self-ComplementaryGraph.html</a>):</p>\n\n<p><a href=\"https://i.stack.imgur.com/6soM9.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/6soM9.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Here $n=1$ of course and the middle position of the degree sequence is $3$ - and we see the sequence of $(3,3,\\color{red}{2},1,1)$.</p>\n" }, { "answer_id": 2103907, "score": 1, "text": "<p>Let $G$ be a self-complementary graph of order $4n+1$ and let $\\sigma:G\\to\\overline G$ be an isomorphism. Since $G$ and $\\overline G$ have the same vertex set $V,$ $\\sigma$ is a permutation of $V,$ an anti-automorphism of $G,$ mapping edges of $G$ to edges of $\\overline G$ and vice versa.</p>\n\n<p>Consider the cycle decomposition of the permutation $\\sigma.$ (Note that we are using the word \"cycle\" in the group theory sense, not the graph theory sense!) Each nontrivial cycle of $\\sigma$ must be of even length, since edges of $G$ alternate with edges of $\\overline G$ as we go around the cycle. Since $|V|$ is even, $\\sigma$ must have a $1$-cycle, i.e., a fixed point. 
(It's clear that the fixed point is unique, but we don't need that.)</p>\n\n<p>Let $v$ be a fixed point of $\\sigma.$ Since $\\sigma$ is an isomorphism between $G$ and $\\overline G,$ we must have $\\deg_G(v)=deg_{\\overline G}(v).$ Since $\\deg_G(v)+\\deg_{\\overline G}(v)=4n,$ it follows that $\\deg_G(v)=\\deg_{\\overline G}(v)=2n.$</p>\n\n<p><strong>P.S.</strong> It's also easy to see that the length of a nontrivial cycle of $\\sigma$ is not only even, it's a multiple of $4;$ and if one vertex in a cycle has degree $2n,$ then all vertices in that cycle have degree $2n.$ Since $\\sigma$ has a unique fixed point, it follows that <strong>the number of vertices of degree $2n$ is congruent to $1$ modulo $4.$</strong></p>\n\n<p><strong>Examples.</strong> There are two self-complementary graphs of order $5.$ One of them is $C_5,$ which has five vertices of degree $2;$ the other is homeomorphic to the letter A and has one vertex of degree $2.$ </p>\n" }, { "answer_id": 2103737, "score": 4, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Complement_graph\" rel=\"nofollow noreferrer\">complementary graph</a> to $G$ adds all edges that are missing from the complete graph $K_{4n+1}$, and removes all existing edges. In $K_{4n+1}$ every vertex has $4n$ edges, so the complementing process changes each vertex degree from $x$ to $4n-x$.</p>\n\n<p>The self-complementary graph must have the same <a href=\"https://en.wikipedia.org/wiki/Degree_(graph_theory)#Degree_sequence\" rel=\"nofollow noreferrer\">degree sequence</a> as its complement, and each vertex has its degree change from $\\text{deg}(v)$ to $4n{-}\\text{deg}(v)$ in the switch to the complement - so there must also be a vertex of that degree in the original graph to switch back. This corresponds to switching between an early position in the degree sequence and a late one.</p>\n\n<p>So at the middle position $2n{+}1$ in the degree sequence, we must have a vertex that switches value with itself; that is, $\\text{deg}(v) = 4n{-}\\text{deg}(v)$. Which means for at least that vertex, $\\text{deg}(v) = 2n$.</p>\n\n<hr>\n\n<p>As an aside, you might also like to consider the other $5$-vertex self-complementary graph (apart from the $5$-cycle) in any proof you attempt: (from <a href=\"http://mathworld.wolfram.com/Self-ComplementaryGraph.html\" rel=\"nofollow noreferrer\">http://mathworld.wolfram.com/Self-ComplementaryGraph.html</a>):</p>\n\n<p><a href=\"https://i.stack.imgur.com/6soM9.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/6soM9.jpg\" alt=\"enter image description here\"></a></p>\n\n<p>Here $n=1$ of course and the middle position of the degree sequence is $3$ - and we see the sequence of $(3,3,\\color{red}{2},1,1)$.</p>\n" }, { "answer_id": 2103907, "score": 1, "text": "<p>Let $G$ be a self-complementary graph of order $4n+1$ and let $\\sigma:G\\to\\overline G$ be an isomorphism. Since $G$ and $\\overline G$ have the same vertex set $V,$ $\\sigma$ is a permutation of $V,$ an anti-automorphism of $G,$ mapping edges of $G$ to edges of $\\overline G$ and vice versa.</p>\n\n<p>Consider the cycle decomposition of the permutation $\\sigma.$ (Note that we are using the word \"cycle\" in the group theory sense, not the graph theory sense!) Each nontrivial cycle of $\\sigma$ must be of even length, since edges of $G$ alternate with edges of $\\overline G$ as we go around the cycle. Since $|V|$ is even, $\\sigma$ must have a $1$-cycle, i.e., a fixed point. 
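<p>Both answers can be sanity-checked on the two order-$5$ examples. A minimal sketch, assuming the <code>networkx</code> package is available (the bull graph is the "letter A" graph mentioned above):</p>
<pre><code>import networkx as nx

# The two self-complementary graphs on 5 = 4*1 + 1 vertices (n = 1).
examples = {"C5": nx.cycle_graph(5), "bull (letter A)": nx.bull_graph()}

for name, G in examples.items():
    assert nx.is_isomorphic(G, nx.complement(G))   # self-complementary
    degrees = sorted(d for _, d in G.degree())
    print(name, degrees)
    # At least one vertex of degree 2n = 2 exists; in fact the middle
    # entry of the sorted degree sequence is 2, as argued above.
    assert degrees[len(degrees) // 2] == 2
</code></pre>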
(It's clear that the fixed point is unique, but we don't need that.)</p>\n\n<p>Let $v$ be a fixed point of $\\sigma.$ Since $\\sigma$ is an isomorphism between $G$ and $\\overline G,$ we must have $\\deg_G(v)=deg_{\\overline G}(v).$ Since $\\deg_G(v)+\\deg_{\\overline G}(v)=4n,$ it follows that $\\deg_G(v)=\\deg_{\\overline G}(v)=2n.$</p>\n\n<p><strong>P.S.</strong> It's also easy to see that the length of a nontrivial cycle of $\\sigma$ is not only even, it's a multiple of $4;$ and if one vertex in a cycle has degree $2n,$ then all vertices in that cycle have degree $2n.$ Since $\\sigma$ has a unique fixed point, it follows that <strong>the number of vertices of degree $2n$ is congruent to $1$ modulo $4.$</strong></p>\n\n<p><strong>Examples.</strong> There are two self-complementary graphs of order $5.$ One of them is $C_5,$ which has five vertices of degree $2;$ the other is homeomorphic to the letter A and has one vertex of degree $2.$ </p>\n" } ]
Question: Prove every self-complementary graph with $4n+1$ vertices has at least one vertex of degree $2n$ Answer 1: <p>The <a href="https://en.wikipedia.org/wiki/Complement_graph" rel="nofollow noreferrer">complementary graph</a> to $G$ adds all edges that are missing from the complete graph $K_{4n+1}$, and removes all existing edges. In $K_{4n+1}$ every vertex has $4n$ edges, so the complementing process changes each vertex degree from $x$ to $4n-x$.</p> <p>The self-complementary graph must have the same <a href="https://en.wikipedia.org/wiki/Degree_(graph_theory)#Degree_sequence" rel="nofollow noreferrer">degree sequence</a> as its complement, and each vertex has its degree change from $\text{deg}(v)$ to $4n{-}\text{deg}(v)$ in the switch to the complement - so there must also be a vertex of that degree in the original graph to switch back. This corresponds to switching between an early position in the degree sequence and a late one.</p> <p>So at the middle position $2n{+}1$ in the degree sequence, we must have a vertex that switches value with itself; that is, $\text{deg}(v) = 4n{-}\text{deg}(v)$. Which means for at least that vertex, $\text{deg}(v) = 2n$.</p> <hr> <p>As an aside, you might also like to consider the other $5$-vertex self-complementary graph (apart from the $5$-cycle) in any proof you attempt: (from <a href="http://mathworld.wolfram.com/Self-ComplementaryGraph.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/Self-ComplementaryGraph.html</a>):</p> <p><a href="https://i.stack.imgur.com/6soM9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6soM9.jpg" alt="enter image description here"></a></p> <p>Here $n=1$ of course and the middle position of the degree sequence is $3$ - and we see the sequence of $(3,3,\color{red}{2},1,1)$.</p> Answer 2: <p>Let $G$ be a self-complementary graph of order $4n+1$ and let $\sigma:G\to\overline G$ be an isomorphism. Since $G$ and $\overline G$ have the same vertex set $V,$ $\sigma$ is a permutation of $V,$ an anti-automorphism of $G,$ mapping edges of $G$ to edges of $\overline G$ and vice versa.</p> <p>Consider the cycle decomposition of the permutation $\sigma.$ (Note that we are using the word "cycle" in the group theory sense, not the graph theory sense!) Each nontrivial cycle of $\sigma$ must be of even length, since edges of $G$ alternate with edges of $\overline G$ as we go around the cycle. Since $|V|=4n+1$ is odd, $\sigma$ must have a $1$-cycle, i.e., a fixed point. 
(It's clear that the fixed point is unique, but we don't need that.)</p> <p>Let $v$ be a fixed point of $\sigma.$ Since $\sigma$ is an isomorphism between $G$ and $\overline G,$ we must have $\deg_G(v)=\deg_{\overline G}(v).$ Since $\deg_G(v)+\deg_{\overline G}(v)=4n,$ it follows that $\deg_G(v)=\deg_{\overline G}(v)=2n.$</p> <p><strong>P.S.</strong> It's also easy to see that the length of a nontrivial cycle of $\sigma$ is not only even, it's a multiple of $4;$ and if one vertex in a cycle has degree $2n,$ then all vertices in that cycle have degree $2n.$ Since $\sigma$ has a unique fixed point, it follows that <strong>the number of vertices of degree $2n$ is congruent to $1$ modulo $4.$</strong></p> <p><strong>Examples.</strong> There are two self-complementary graphs of order $5.$ One of them is $C_5,$ which has five vertices of degree $2;$ the other is homeomorphic to the letter A and has one vertex of degree $2.$ </p>
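As a quick sanity check of the two answers above in the smallest case $n=1$, here is a brute-force sketch (pure Python; the function and variable names are ad hoc, not from either post) that enumerates all graphs on $5$ vertices, keeps the self-complementary ones, and verifies both claims: each has a vertex of degree $2$, and the number of degree-$2$ vertices is congruent to $1$ modulo $4$.

```python
# Brute-force check for n = 1 (order 5). Vertices are 0..4 and a graph
# is a frozenset of sorted edge tuples; K_5 has 10 possible edges.
from itertools import combinations, permutations

V = range(5)
ALL_PAIRS = list(combinations(V, 2))

def is_self_complementary(edges):
    comp = frozenset(p for p in ALL_PAIRS if p not in edges)
    # try every vertex permutation as a potential isomorphism G -> complement
    return any(
        frozenset(tuple(sorted((s[u], s[v]))) for u, v in edges) == comp
        for s in permutations(V)
    )

count = 0
# a self-complementary graph on 5 vertices must have exactly 10/2 = 5 edges
for edge_subset in combinations(ALL_PAIRS, 5):
    edges = frozenset(edge_subset)
    if not is_self_complementary(edges):
        continue
    count += 1
    deg = {v: sum(v in e for e in edges) for v in V}
    n_mid = sum(1 for v in V if deg[v] == 2)   # vertices of degree 2n = 2
    assert n_mid >= 1 and n_mid % 4 == 1, (sorted(edges), deg)

print(count, "labeled self-complementary graphs on 5 vertices; claims hold")
```

The printed count is over labeled graphs; up to isomorphism these are exactly the two graphs named in the answers, $C_5$ (five vertices of degree $2$) and the A-shaped graph with degree sequence $(3,3,2,1,1)$.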
Functional differential equation $D'(t) = 2 D(2t) - D(t)$
2,103,792
7
https://math.stackexchange.com/questions/2103792/functional-differential-equation-dt-2-d2t-dt
<p>Can the functional differential equation $$D'(t) = 2 D(2t) - D(t)$$ with initial condition $D(0)=0$ be solved in closed form? A Laplace transform yields the functional equation $$ (s+1)d(s) = d(s/2) $$ but this equation seems badly behaved ($d(0)$ indeterminate).</p>
[ { "answer_id": 2103834, "score": 6, "text": "<p>There are series solutions</p>\n\n<p>$$ \\eqalign{d(t) &amp;= c \\left( e^{-t} - 2 e^{-2t} + \\frac{4}{3} e^{-4t} - \\frac{8}{21} e^{-8t} + \\ldots\\right)\\cr\n &amp;= c \\sum_{k=0}^\\infty \\frac{(-2)^k}{ \\prod_{j=1}^k (2^j-1)} e^{-2^kt}\\cr}$$</p>\n\n<p>which all have $d(0) = 0$. These should converge for $t \\ge 0$.</p>\n" }, { "answer_id": 2103835, "score": 3, "text": "<p>Let us relax the initial condition and look at a solution of the form</p>\n\n<p>$$ D(t) = \\sum_{k=-\\infty}^{\\infty} a_k \\exp(-2^k t). $$</p>\n\n<p>Plugging this to the equation and assuming that term-wise differentiation applies, we find that $(a_k)$ satisfies the recurrence relation $ (1-2^k) a_k = 2a_{k-1}$, whose solution can be written in terms of the <a href=\"https://en.wikipedia.org/wiki/Q-Pochhammer_symbol#q-series\" rel=\"nofollow noreferrer\"><em>$q$-Pochhammer symbol</em></a>:</p>\n\n<p>$$ a_k = \\begin{cases} \\dfrac{2^k}{(2,2)_k} a_0, &amp; k \\geq 0 \\\\ 0, &amp; k &lt; 0 \\end{cases} $$</p>\n\n<p>Consequently we can write</p>\n\n<p>$$ D(t) = a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k}\\exp(-2^k t) $$</p>\n\n<p>for some constant $a$.</p>\n\n<p><em>p.s.</em> As pointed out by <strong>Robert Israel</strong>, these solutions indeed satisfy the initial condition $D(0) = 0$ (which I did not expect before looking at his answer). Here is a simple confirmation:</p>\n\n<p>\\begin{align*}\nD(0)\n&amp;= a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k} \\\\\n&amp;= a \\left(1 + \\sum_{k=1}^{\\infty} \\frac{2^k - 1}{(2;2)_k} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= a \\left(1 - \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_{k-1}} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= 0.\n\\end{align*}</p>\n" }, { "answer_id": 2103834, "score": 6, "text": "<p>There are series solutions</p>\n\n<p>$$ \\eqalign{d(t) &amp;= c \\left( e^{-t} - 2 e^{-2t} + \\frac{4}{3} e^{-4t} - \\frac{8}{21} e^{-8t} + \\ldots\\right)\\cr\n &amp;= c \\sum_{k=0}^\\infty \\frac{(-2)^k}{ \\prod_{j=1}^k (2^j-1)} e^{-2^kt}\\cr}$$</p>\n\n<p>which all have $d(0) = 0$. These should converge for $t \\ge 0$.</p>\n" }, { "answer_id": 2103835, "score": 3, "text": "<p>Let us relax the initial condition and look at a solution of the form</p>\n\n<p>$$ D(t) = \\sum_{k=-\\infty}^{\\infty} a_k \\exp(-2^k t). $$</p>\n\n<p>Plugging this to the equation and assuming that term-wise differentiation applies, we find that $(a_k)$ satisfies the recurrence relation $ (1-2^k) a_k = 2a_{k-1}$, whose solution can be written in terms of the <a href=\"https://en.wikipedia.org/wiki/Q-Pochhammer_symbol#q-series\" rel=\"nofollow noreferrer\"><em>$q$-Pochhammer symbol</em></a>:</p>\n\n<p>$$ a_k = \\begin{cases} \\dfrac{2^k}{(2,2)_k} a_0, &amp; k \\geq 0 \\\\ 0, &amp; k &lt; 0 \\end{cases} $$</p>\n\n<p>Consequently we can write</p>\n\n<p>$$ D(t) = a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k}\\exp(-2^k t) $$</p>\n\n<p>for some constant $a$.</p>\n\n<p><em>p.s.</em> As pointed out by <strong>Robert Israel</strong>, these solutions indeed satisfy the initial condition $D(0) = 0$ (which I did not expect before looking at his answer). 
Here is a simple confirmation:</p>\n\n<p>\\begin{align*}\nD(0)\n&amp;= a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k} \\\\\n&amp;= a \\left(1 + \\sum_{k=1}^{\\infty} \\frac{2^k - 1}{(2;2)_k} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= a \\left(1 - \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_{k-1}} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= 0.\n\\end{align*}</p>\n" } ]
Here is a simple confirmation:</p>\n\n<p>\\begin{align*}\nD(0)\n&amp;= a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k} \\\\\n&amp;= a \\left(1 + \\sum_{k=1}^{\\infty} \\frac{2^k - 1}{(2;2)_k} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= a \\left(1 - \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_{k-1}} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= 0.\n\\end{align*}</p>\n" }, { "answer_id": 2103834, "score": 6, "text": "<p>There are series solutions</p>\n\n<p>$$ \\eqalign{d(t) &amp;= c \\left( e^{-t} - 2 e^{-2t} + \\frac{4}{3} e^{-4t} - \\frac{8}{21} e^{-8t} + \\ldots\\right)\\cr\n &amp;= c \\sum_{k=0}^\\infty \\frac{(-2)^k}{ \\prod_{j=1}^k (2^j-1)} e^{-2^kt}\\cr}$$</p>\n\n<p>which all have $d(0) = 0$. These should converge for $t \\ge 0$.</p>\n" }, { "answer_id": 2103835, "score": 3, "text": "<p>Let us relax the initial condition and look at a solution of the form</p>\n\n<p>$$ D(t) = \\sum_{k=-\\infty}^{\\infty} a_k \\exp(-2^k t). $$</p>\n\n<p>Plugging this to the equation and assuming that term-wise differentiation applies, we find that $(a_k)$ satisfies the recurrence relation $ (1-2^k) a_k = 2a_{k-1}$, whose solution can be written in terms of the <a href=\"https://en.wikipedia.org/wiki/Q-Pochhammer_symbol#q-series\" rel=\"nofollow noreferrer\"><em>$q$-Pochhammer symbol</em></a>:</p>\n\n<p>$$ a_k = \\begin{cases} \\dfrac{2^k}{(2,2)_k} a_0, &amp; k \\geq 0 \\\\ 0, &amp; k &lt; 0 \\end{cases} $$</p>\n\n<p>Consequently we can write</p>\n\n<p>$$ D(t) = a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k}\\exp(-2^k t) $$</p>\n\n<p>for some constant $a$.</p>\n\n<p><em>p.s.</em> As pointed out by <strong>Robert Israel</strong>, these solutions indeed satisfy the initial condition $D(0) = 0$ (which I did not expect before looking at his answer). Here is a simple confirmation:</p>\n\n<p>\\begin{align*}\nD(0)\n&amp;= a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k} \\\\\n&amp;= a \\left(1 + \\sum_{k=1}^{\\infty} \\frac{2^k - 1}{(2;2)_k} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= a \\left(1 - \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_{k-1}} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= 0.\n\\end{align*}</p>\n" }, { "answer_id": 2103834, "score": 6, "text": "<p>There are series solutions</p>\n\n<p>$$ \\eqalign{d(t) &amp;= c \\left( e^{-t} - 2 e^{-2t} + \\frac{4}{3} e^{-4t} - \\frac{8}{21} e^{-8t} + \\ldots\\right)\\cr\n &amp;= c \\sum_{k=0}^\\infty \\frac{(-2)^k}{ \\prod_{j=1}^k (2^j-1)} e^{-2^kt}\\cr}$$</p>\n\n<p>which all have $d(0) = 0$. These should converge for $t \\ge 0$.</p>\n" }, { "answer_id": 2103835, "score": 3, "text": "<p>Let us relax the initial condition and look at a solution of the form</p>\n\n<p>$$ D(t) = \\sum_{k=-\\infty}^{\\infty} a_k \\exp(-2^k t). 
$$</p>\n\n<p>Plugging this to the equation and assuming that term-wise differentiation applies, we find that $(a_k)$ satisfies the recurrence relation $ (1-2^k) a_k = 2a_{k-1}$, whose solution can be written in terms of the <a href=\"https://en.wikipedia.org/wiki/Q-Pochhammer_symbol#q-series\" rel=\"nofollow noreferrer\"><em>$q$-Pochhammer symbol</em></a>:</p>\n\n<p>$$ a_k = \\begin{cases} \\dfrac{2^k}{(2,2)_k} a_0, &amp; k \\geq 0 \\\\ 0, &amp; k &lt; 0 \\end{cases} $$</p>\n\n<p>Consequently we can write</p>\n\n<p>$$ D(t) = a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k}\\exp(-2^k t) $$</p>\n\n<p>for some constant $a$.</p>\n\n<p><em>p.s.</em> As pointed out by <strong>Robert Israel</strong>, these solutions indeed satisfy the initial condition $D(0) = 0$ (which I did not expect before looking at his answer). Here is a simple confirmation:</p>\n\n<p>\\begin{align*}\nD(0)\n&amp;= a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k} \\\\\n&amp;= a \\left(1 + \\sum_{k=1}^{\\infty} \\frac{2^k - 1}{(2;2)_k} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= a \\left(1 - \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_{k-1}} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= 0.\n\\end{align*}</p>\n" }, { "answer_id": 2103834, "score": 6, "text": "<p>There are series solutions</p>\n\n<p>$$ \\eqalign{d(t) &amp;= c \\left( e^{-t} - 2 e^{-2t} + \\frac{4}{3} e^{-4t} - \\frac{8}{21} e^{-8t} + \\ldots\\right)\\cr\n &amp;= c \\sum_{k=0}^\\infty \\frac{(-2)^k}{ \\prod_{j=1}^k (2^j-1)} e^{-2^kt}\\cr}$$</p>\n\n<p>which all have $d(0) = 0$. These should converge for $t \\ge 0$.</p>\n" }, { "answer_id": 2103835, "score": 3, "text": "<p>Let us relax the initial condition and look at a solution of the form</p>\n\n<p>$$ D(t) = \\sum_{k=-\\infty}^{\\infty} a_k \\exp(-2^k t). $$</p>\n\n<p>Plugging this to the equation and assuming that term-wise differentiation applies, we find that $(a_k)$ satisfies the recurrence relation $ (1-2^k) a_k = 2a_{k-1}$, whose solution can be written in terms of the <a href=\"https://en.wikipedia.org/wiki/Q-Pochhammer_symbol#q-series\" rel=\"nofollow noreferrer\"><em>$q$-Pochhammer symbol</em></a>:</p>\n\n<p>$$ a_k = \\begin{cases} \\dfrac{2^k}{(2,2)_k} a_0, &amp; k \\geq 0 \\\\ 0, &amp; k &lt; 0 \\end{cases} $$</p>\n\n<p>Consequently we can write</p>\n\n<p>$$ D(t) = a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k}\\exp(-2^k t) $$</p>\n\n<p>for some constant $a$.</p>\n\n<p><em>p.s.</em> As pointed out by <strong>Robert Israel</strong>, these solutions indeed satisfy the initial condition $D(0) = 0$ (which I did not expect before looking at his answer). Here is a simple confirmation:</p>\n\n<p>\\begin{align*}\nD(0)\n&amp;= a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k} \\\\\n&amp;= a \\left(1 + \\sum_{k=1}^{\\infty} \\frac{2^k - 1}{(2;2)_k} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= a \\left(1 - \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_{k-1}} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= 0.\n\\end{align*}</p>\n" }, { "answer_id": 2103834, "score": 6, "text": "<p>There are series solutions</p>\n\n<p>$$ \\eqalign{d(t) &amp;= c \\left( e^{-t} - 2 e^{-2t} + \\frac{4}{3} e^{-4t} - \\frac{8}{21} e^{-8t} + \\ldots\\right)\\cr\n &amp;= c \\sum_{k=0}^\\infty \\frac{(-2)^k}{ \\prod_{j=1}^k (2^j-1)} e^{-2^kt}\\cr}$$</p>\n\n<p>which all have $d(0) = 0$. 
These should converge for $t \\ge 0$.</p>\n" }, { "answer_id": 2103835, "score": 3, "text": "<p>Let us relax the initial condition and look at a solution of the form</p>\n\n<p>$$ D(t) = \\sum_{k=-\\infty}^{\\infty} a_k \\exp(-2^k t). $$</p>\n\n<p>Plugging this to the equation and assuming that term-wise differentiation applies, we find that $(a_k)$ satisfies the recurrence relation $ (1-2^k) a_k = 2a_{k-1}$, whose solution can be written in terms of the <a href=\"https://en.wikipedia.org/wiki/Q-Pochhammer_symbol#q-series\" rel=\"nofollow noreferrer\"><em>$q$-Pochhammer symbol</em></a>:</p>\n\n<p>$$ a_k = \\begin{cases} \\dfrac{2^k}{(2,2)_k} a_0, &amp; k \\geq 0 \\\\ 0, &amp; k &lt; 0 \\end{cases} $$</p>\n\n<p>Consequently we can write</p>\n\n<p>$$ D(t) = a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k}\\exp(-2^k t) $$</p>\n\n<p>for some constant $a$.</p>\n\n<p><em>p.s.</em> As pointed out by <strong>Robert Israel</strong>, these solutions indeed satisfy the initial condition $D(0) = 0$ (which I did not expect before looking at his answer). Here is a simple confirmation:</p>\n\n<p>\\begin{align*}\nD(0)\n&amp;= a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k} \\\\\n&amp;= a \\left(1 + \\sum_{k=1}^{\\infty} \\frac{2^k - 1}{(2;2)_k} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= a \\left(1 - \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_{k-1}} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= 0.\n\\end{align*}</p>\n" }, { "answer_id": 2103834, "score": 6, "text": "<p>There are series solutions</p>\n\n<p>$$ \\eqalign{d(t) &amp;= c \\left( e^{-t} - 2 e^{-2t} + \\frac{4}{3} e^{-4t} - \\frac{8}{21} e^{-8t} + \\ldots\\right)\\cr\n &amp;= c \\sum_{k=0}^\\infty \\frac{(-2)^k}{ \\prod_{j=1}^k (2^j-1)} e^{-2^kt}\\cr}$$</p>\n\n<p>which all have $d(0) = 0$. These should converge for $t \\ge 0$.</p>\n" }, { "answer_id": 2103835, "score": 3, "text": "<p>Let us relax the initial condition and look at a solution of the form</p>\n\n<p>$$ D(t) = \\sum_{k=-\\infty}^{\\infty} a_k \\exp(-2^k t). $$</p>\n\n<p>Plugging this to the equation and assuming that term-wise differentiation applies, we find that $(a_k)$ satisfies the recurrence relation $ (1-2^k) a_k = 2a_{k-1}$, whose solution can be written in terms of the <a href=\"https://en.wikipedia.org/wiki/Q-Pochhammer_symbol#q-series\" rel=\"nofollow noreferrer\"><em>$q$-Pochhammer symbol</em></a>:</p>\n\n<p>$$ a_k = \\begin{cases} \\dfrac{2^k}{(2,2)_k} a_0, &amp; k \\geq 0 \\\\ 0, &amp; k &lt; 0 \\end{cases} $$</p>\n\n<p>Consequently we can write</p>\n\n<p>$$ D(t) = a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k}\\exp(-2^k t) $$</p>\n\n<p>for some constant $a$.</p>\n\n<p><em>p.s.</em> As pointed out by <strong>Robert Israel</strong>, these solutions indeed satisfy the initial condition $D(0) = 0$ (which I did not expect before looking at his answer). 
Here is a simple confirmation:</p>\n\n<p>\\begin{align*}\nD(0)\n&amp;= a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k} \\\\\n&amp;= a \\left(1 + \\sum_{k=1}^{\\infty} \\frac{2^k - 1}{(2;2)_k} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= a \\left(1 - \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_{k-1}} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= 0.\n\\end{align*}</p>\n" }, { "answer_id": 2103834, "score": 6, "text": "<p>There are series solutions</p>\n\n<p>$$ \\eqalign{d(t) &amp;= c \\left( e^{-t} - 2 e^{-2t} + \\frac{4}{3} e^{-4t} - \\frac{8}{21} e^{-8t} + \\ldots\\right)\\cr\n &amp;= c \\sum_{k=0}^\\infty \\frac{(-2)^k}{ \\prod_{j=1}^k (2^j-1)} e^{-2^kt}\\cr}$$</p>\n\n<p>which all have $d(0) = 0$. These should converge for $t \\ge 0$.</p>\n" }, { "answer_id": 2103835, "score": 3, "text": "<p>Let us relax the initial condition and look at a solution of the form</p>\n\n<p>$$ D(t) = \\sum_{k=-\\infty}^{\\infty} a_k \\exp(-2^k t). $$</p>\n\n<p>Plugging this to the equation and assuming that term-wise differentiation applies, we find that $(a_k)$ satisfies the recurrence relation $ (1-2^k) a_k = 2a_{k-1}$, whose solution can be written in terms of the <a href=\"https://en.wikipedia.org/wiki/Q-Pochhammer_symbol#q-series\" rel=\"nofollow noreferrer\"><em>$q$-Pochhammer symbol</em></a>:</p>\n\n<p>$$ a_k = \\begin{cases} \\dfrac{2^k}{(2,2)_k} a_0, &amp; k \\geq 0 \\\\ 0, &amp; k &lt; 0 \\end{cases} $$</p>\n\n<p>Consequently we can write</p>\n\n<p>$$ D(t) = a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k}\\exp(-2^k t) $$</p>\n\n<p>for some constant $a$.</p>\n\n<p><em>p.s.</em> As pointed out by <strong>Robert Israel</strong>, these solutions indeed satisfy the initial condition $D(0) = 0$ (which I did not expect before looking at his answer). Here is a simple confirmation:</p>\n\n<p>\\begin{align*}\nD(0)\n&amp;= a \\sum_{k=0}^{\\infty} \\frac{2^k}{(2;2)_k} \\\\\n&amp;= a \\left(1 + \\sum_{k=1}^{\\infty} \\frac{2^k - 1}{(2;2)_k} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= a \\left(1 - \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_{k-1}} + \\sum_{k=1}^{\\infty} \\frac{1}{(2;2)_k} \\right) \\\\\n&amp;= 0.\n\\end{align*}</p>\n" } ]
Question: Functional differential equation $D'(t) = 2 D(2t) - D(t)$ Answer 1: <p>There are series solutions</p> <p>$$ \eqalign{d(t) &amp;= c \left( e^{-t} - 2 e^{-2t} + \frac{4}{3} e^{-4t} - \frac{8}{21} e^{-8t} + \ldots\right)\cr &amp;= c \sum_{k=0}^\infty \frac{(-2)^k}{ \prod_{j=1}^k (2^j-1)} e^{-2^kt}\cr}$$</p> <p>which all have $d(0) = 0$. These should converge for $t \ge 0$.</p> Answer 2: <p>Let us relax the initial condition and look at a solution of the form</p> <p>$$ D(t) = \sum_{k=-\infty}^{\infty} a_k \exp(-2^k t). $$</p> <p>Plugging this into the equation and assuming that term-wise differentiation applies, we find that $(a_k)$ satisfies the recurrence relation $ (1-2^k) a_k = 2a_{k-1}$, whose solution can be written in terms of the <a href="https://en.wikipedia.org/wiki/Q-Pochhammer_symbol#q-series" rel="nofollow noreferrer"><em>$q$-Pochhammer symbol</em></a>:</p> <p>$$ a_k = \begin{cases} \dfrac{2^k}{(2;2)_k} a_0, &amp; k \geq 0 \\ 0, &amp; k &lt; 0 \end{cases} $$</p> <p>Consequently we can write</p> <p>$$ D(t) = a \sum_{k=0}^{\infty} \frac{2^k}{(2;2)_k}\exp(-2^k t) $$</p> <p>for some constant $a$.</p> <p><em>p.s.</em> As pointed out by <strong>Robert Israel</strong>, these solutions indeed satisfy the initial condition $D(0) = 0$ (which I did not expect before looking at his answer). Here is a simple confirmation:</p> <p>\begin{align*} D(0) &amp;= a \sum_{k=0}^{\infty} \frac{2^k}{(2;2)_k} \\ &amp;= a \left(1 + \sum_{k=1}^{\infty} \frac{2^k - 1}{(2;2)_k} + \sum_{k=1}^{\infty} \frac{1}{(2;2)_k} \right) \\ &amp;= a \left(1 - \sum_{k=1}^{\infty} \frac{1}{(2;2)_{k-1}} + \sum_{k=1}^{\infty} \frac{1}{(2;2)_k} \right) \\ &amp;= 0. \end{align*}</p>
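As a quick sanity check on both answers (the sketch below is not from either of them, and all names in it are illustrative), one can truncate the series and test the claims numerically. It assumes Robert Israel's normalization $c_0 = 1$, $c_k = -2c_{k-1}/(2^k - 1)$, which is exactly the recurrence of the second answer with $a = c_0$:

```python
import math

def coeffs(K):
    # c_0 = 1 and c_k = -2 c_{k-1} / (2^k - 1), read off the series above;
    # the coefficients decay super-exponentially, so K = 30 is plenty.
    c = [1.0]
    for k in range(1, K + 1):
        c.append(-2.0 * c[-1] / (2.0 ** k - 1.0))
    return c

def D(t, c):
    # Truncated series D(t) = sum_k c_k exp(-2^k t); math.exp quietly
    # underflows to 0.0 for very negative arguments, which is fine here.
    return sum(ck * math.exp(-(2.0 ** k) * t) for k, ck in enumerate(c))

def D_prime(t, c):
    # Term-wise derivative of the truncated series.
    return sum(-(2.0 ** k) * ck * math.exp(-(2.0 ** k) * t) for k, ck in enumerate(c))

c = coeffs(30)
print("D(0) =", D(0.0, c))  # ~0 up to rounding: the claimed initial condition
for t in (0.25, 0.5, 1.0, 2.0):
    residual = D_prime(t, c) - (2.0 * D(2.0 * t, c) - D(t, c))
    print(f"t = {t}: D(t) = {D(t, c):+.6e}, residual = {residual:+.2e}")
```

For the truncation at $K$ terms the recurrence telescopes, so the exact residual of $D'(t) = 2D(2t) - D(t)$ is just $-2c_K e^{-2^{K+1}t}$; anything larger that the loop prints is floating-point noise.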
Homework problem for a first grader on $9$gag
2105267
7
https://math.stackexchange.com/questions/2105267/homework-problem-for-a-first-grader-on-9gag
<p>I saw the image below on 9gag, and naturally I asked myself: how many patterns can one find (and justify)? Are there any "fun" patterns, using your "first grader imagination" and your mathematical skills?</p> <p><a href="https://i.stack.imgur.com/j1pgp.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j1pgp.jpg" alt="enter image description here"></a></p>
[ { "answer_id": 2105282, "score": 14, "text": "<p>J makes sense for a firstgrader</p>\n\n<p>this way both diagonals are 40</p>\n\n<p>I don't think such tasks are helpful in any way.</p>\n" }, { "answer_id": 2105358, "score": 7, "text": "<p>$23 \\times 6 \\times 11 = 1518$</p>\n\n<p>$20 \\times 6 \\times \\left(12 + \\frac{13}{20}\\right) = 1518$</p>\n\n<p>Thus, the answer is $\\left(\\text{H} + \\frac{\\text{I}}{\\text{P}}\\right)$</p>\n" }, { "answer_id": 2105296, "score": 5, "text": "<p>The digits of $23$ sum to $5$, and $6 + 5 = 11$. The digits of $20$ sum to $2$, so one answer is $8 = D$. </p>\n" }, { "answer_id": 2105278, "score": 4, "text": "<p>I guess it is is $J = 14$. Indeed, the sum of the diagonal GBS is $40$. If you add $J$, then also the sum of the other diagonal (JBP) is $40$.</p>\n\n<p>Edited</p>\n" }, { "answer_id": 2105295, "score": 3, "text": "<p>For instance $23=2\\cdot 6 + 11$ so with $?=8$ we would have $8+2\\cdot 6=20$. </p>\n\n<p>And $23-11-6=6$ and $20-6-8=6$ so $?=8$ could work out this way as well.</p>\n" }, { "answer_id": 2105552, "score": 1, "text": "<p>The answer J doesn't seem as indefensible to me as the question implies. </p>\n\n<p>Plugging these numbers in makes the two diagonal sums match, as others have pointed out. However, it also makes the two horizontal differences match (J-G=3, and S-O=3) and the two vertical differences match (S-J=9, O-G=9). </p>\n\n<p>That's three patterns all satisfied by one number, which is pretty pattern-y from non-mathematician's point of view.</p>\n\n<p>I do agree that it doesn't seem to have much pedagogical value as an exercise for small children.</p>\n" }, { "answer_id": 2105548, "score": 0, "text": "<p>Both diagonals summed is 20, Blank => J = 14. </p>\n\n<p>Top and bot row, subtracted left to right.\n2(23-20)=6.\n2(J-11)=6. J = 14 again.</p>\n" }, { "answer_id": 2105282, "score": 14, "text": "<p>J makes sense for a firstgrader</p>\n\n<p>this way both diagonals are 40</p>\n\n<p>I don't think such tasks are helpful in any way.</p>\n" }, { "answer_id": 2105358, "score": 7, "text": "<p>$23 \\times 6 \\times 11 = 1518$</p>\n\n<p>$20 \\times 6 \\times \\left(12 + \\frac{13}{20}\\right) = 1518$</p>\n\n<p>Thus, the answer is $\\left(\\text{H} + \\frac{\\text{I}}{\\text{P}}\\right)$</p>\n" }, { "answer_id": 2105296, "score": 5, "text": "<p>The digits of $23$ sum to $5$, and $6 + 5 = 11$. The digits of $20$ sum to $2$, so one answer is $8 = D$. </p>\n" }, { "answer_id": 2105278, "score": 4, "text": "<p>I guess it is is $J = 14$. Indeed, the sum of the diagonal GBS is $40$. If you add $J$, then also the sum of the other diagonal (JBP) is $40$.</p>\n\n<p>Edited</p>\n" }, { "answer_id": 2105295, "score": 3, "text": "<p>For instance $23=2\\cdot 6 + 11$ so with $?=8$ we would have $8+2\\cdot 6=20$. </p>\n\n<p>And $23-11-6=6$ and $20-6-8=6$ so $?=8$ could work out this way as well.</p>\n" }, { "answer_id": 2105552, "score": 1, "text": "<p>The answer J doesn't seem as indefensible to me as the question implies. </p>\n\n<p>Plugging these numbers in makes the two diagonal sums match, as others have pointed out. However, it also makes the two horizontal differences match (J-G=3, and S-O=3) and the two vertical differences match (S-J=9, O-G=9). 
</p>\n\n<p>That's three patterns all satisfied by one number, which is pretty pattern-y from non-mathematician's point of view.</p>\n\n<p>I do agree that it doesn't seem to have much pedagogical value as an exercise for small children.</p>\n" }, { "answer_id": 2105548, "score": 0, "text": "<p>Both diagonals summed is 20, Blank => J = 14. </p>\n\n<p>Top and bot row, subtracted left to right.\n2(23-20)=6.\n2(J-11)=6. J = 14 again.</p>\n" }, { "answer_id": 2105282, "score": 14, "text": "<p>J makes sense for a firstgrader</p>\n\n<p>this way both diagonals are 40</p>\n\n<p>I don't think such tasks are helpful in any way.</p>\n" }, { "answer_id": 2105358, "score": 7, "text": "<p>$23 \\times 6 \\times 11 = 1518$</p>\n\n<p>$20 \\times 6 \\times \\left(12 + \\frac{13}{20}\\right) = 1518$</p>\n\n<p>Thus, the answer is $\\left(\\text{H} + \\frac{\\text{I}}{\\text{P}}\\right)$</p>\n" }, { "answer_id": 2105296, "score": 5, "text": "<p>The digits of $23$ sum to $5$, and $6 + 5 = 11$. The digits of $20$ sum to $2$, so one answer is $8 = D$. </p>\n" }, { "answer_id": 2105278, "score": 4, "text": "<p>I guess it is is $J = 14$. Indeed, the sum of the diagonal GBS is $40$. If you add $J$, then also the sum of the other diagonal (JBP) is $40$.</p>\n\n<p>Edited</p>\n" }, { "answer_id": 2105295, "score": 3, "text": "<p>For instance $23=2\\cdot 6 + 11$ so with $?=8$ we would have $8+2\\cdot 6=20$. </p>\n\n<p>And $23-11-6=6$ and $20-6-8=6$ so $?=8$ could work out this way as well.</p>\n" }, { "answer_id": 2105552, "score": 1, "text": "<p>The answer J doesn't seem as indefensible to me as the question implies. </p>\n\n<p>Plugging these numbers in makes the two diagonal sums match, as others have pointed out. However, it also makes the two horizontal differences match (J-G=3, and S-O=3) and the two vertical differences match (S-J=9, O-G=9). </p>\n\n<p>That's three patterns all satisfied by one number, which is pretty pattern-y from non-mathematician's point of view.</p>\n\n<p>I do agree that it doesn't seem to have much pedagogical value as an exercise for small children.</p>\n" }, { "answer_id": 2105548, "score": 0, "text": "<p>Both diagonals summed is 20, Blank => J = 14. </p>\n\n<p>Top and bot row, subtracted left to right.\n2(23-20)=6.\n2(J-11)=6. J = 14 again.</p>\n" }, { "answer_id": 2105282, "score": 14, "text": "<p>J makes sense for a firstgrader</p>\n\n<p>this way both diagonals are 40</p>\n\n<p>I don't think such tasks are helpful in any way.</p>\n" }, { "answer_id": 2105358, "score": 7, "text": "<p>$23 \\times 6 \\times 11 = 1518$</p>\n\n<p>$20 \\times 6 \\times \\left(12 + \\frac{13}{20}\\right) = 1518$</p>\n\n<p>Thus, the answer is $\\left(\\text{H} + \\frac{\\text{I}}{\\text{P}}\\right)$</p>\n" }, { "answer_id": 2105296, "score": 5, "text": "<p>The digits of $23$ sum to $5$, and $6 + 5 = 11$. The digits of $20$ sum to $2$, so one answer is $8 = D$. </p>\n" }, { "answer_id": 2105278, "score": 4, "text": "<p>I guess it is is $J = 14$. Indeed, the sum of the diagonal GBS is $40$. If you add $J$, then also the sum of the other diagonal (JBP) is $40$.</p>\n\n<p>Edited</p>\n" }, { "answer_id": 2105295, "score": 3, "text": "<p>For instance $23=2\\cdot 6 + 11$ so with $?=8$ we would have $8+2\\cdot 6=20$. </p>\n\n<p>And $23-11-6=6$ and $20-6-8=6$ so $?=8$ could work out this way as well.</p>\n" }, { "answer_id": 2105552, "score": 1, "text": "<p>The answer J doesn't seem as indefensible to me as the question implies. 
</p>\n\n<p>Plugging these numbers in makes the two diagonal sums match, as others have pointed out. However, it also makes the two horizontal differences match (J-G=3, and S-O=3) and the two vertical differences match (S-J=9, O-G=9). </p>\n\n<p>That's three patterns all satisfied by one number, which is pretty pattern-y from non-mathematician's point of view.</p>\n\n<p>I do agree that it doesn't seem to have much pedagogical value as an exercise for small children.</p>\n" }, { "answer_id": 2105548, "score": 0, "text": "<p>Both diagonals summed is 20, Blank => J = 14. </p>\n\n<p>Top and bot row, subtracted left to right.\n2(23-20)=6.\n2(J-11)=6. J = 14 again.</p>\n" }, { "answer_id": 2105282, "score": 14, "text": "<p>J makes sense for a firstgrader</p>\n\n<p>this way both diagonals are 40</p>\n\n<p>I don't think such tasks are helpful in any way.</p>\n" }, { "answer_id": 2105358, "score": 7, "text": "<p>$23 \\times 6 \\times 11 = 1518$</p>\n\n<p>$20 \\times 6 \\times \\left(12 + \\frac{13}{20}\\right) = 1518$</p>\n\n<p>Thus, the answer is $\\left(\\text{H} + \\frac{\\text{I}}{\\text{P}}\\right)$</p>\n" }, { "answer_id": 2105296, "score": 5, "text": "<p>The digits of $23$ sum to $5$, and $6 + 5 = 11$. The digits of $20$ sum to $2$, so one answer is $8 = D$. </p>\n" }, { "answer_id": 2105278, "score": 4, "text": "<p>I guess it is is $J = 14$. Indeed, the sum of the diagonal GBS is $40$. If you add $J$, then also the sum of the other diagonal (JBP) is $40$.</p>\n\n<p>Edited</p>\n" }, { "answer_id": 2105295, "score": 3, "text": "<p>For instance $23=2\\cdot 6 + 11$ so with $?=8$ we would have $8+2\\cdot 6=20$. </p>\n\n<p>And $23-11-6=6$ and $20-6-8=6$ so $?=8$ could work out this way as well.</p>\n" }, { "answer_id": 2105552, "score": 1, "text": "<p>The answer J doesn't seem as indefensible to me as the question implies. </p>\n\n<p>Plugging these numbers in makes the two diagonal sums match, as others have pointed out. However, it also makes the two horizontal differences match (J-G=3, and S-O=3) and the two vertical differences match (S-J=9, O-G=9). </p>\n\n<p>That's three patterns all satisfied by one number, which is pretty pattern-y from non-mathematician's point of view.</p>\n\n<p>I do agree that it doesn't seem to have much pedagogical value as an exercise for small children.</p>\n" }, { "answer_id": 2105548, "score": 0, "text": "<p>Both diagonals summed is 20, Blank => J = 14. </p>\n\n<p>Top and bot row, subtracted left to right.\n2(23-20)=6.\n2(J-11)=6. J = 14 again.</p>\n" }, { "answer_id": 2105282, "score": 14, "text": "<p>J makes sense for a firstgrader</p>\n\n<p>this way both diagonals are 40</p>\n\n<p>I don't think such tasks are helpful in any way.</p>\n" }, { "answer_id": 2105358, "score": 7, "text": "<p>$23 \\times 6 \\times 11 = 1518$</p>\n\n<p>$20 \\times 6 \\times \\left(12 + \\frac{13}{20}\\right) = 1518$</p>\n\n<p>Thus, the answer is $\\left(\\text{H} + \\frac{\\text{I}}{\\text{P}}\\right)$</p>\n" }, { "answer_id": 2105296, "score": 5, "text": "<p>The digits of $23$ sum to $5$, and $6 + 5 = 11$. The digits of $20$ sum to $2$, so one answer is $8 = D$. </p>\n" }, { "answer_id": 2105278, "score": 4, "text": "<p>I guess it is is $J = 14$. Indeed, the sum of the diagonal GBS is $40$. If you add $J$, then also the sum of the other diagonal (JBP) is $40$.</p>\n\n<p>Edited</p>\n" }, { "answer_id": 2105295, "score": 3, "text": "<p>For instance $23=2\\cdot 6 + 11$ so with $?=8$ we would have $8+2\\cdot 6=20$. 
</p>\n\n<p>And $23-11-6=6$ and $20-6-8=6$ so $?=8$ could work out this way as well.</p>\n" }, { "answer_id": 2105552, "score": 1, "text": "<p>The answer J doesn't seem as indefensible to me as the question implies. </p>\n\n<p>Plugging these numbers in makes the two diagonal sums match, as others have pointed out. However, it also makes the two horizontal differences match (J-G=3, and S-O=3) and the two vertical differences match (S-J=9, O-G=9). </p>\n\n<p>That's three patterns all satisfied by one number, which is pretty pattern-y from non-mathematician's point of view.</p>\n\n<p>I do agree that it doesn't seem to have much pedagogical value as an exercise for small children.</p>\n" }, { "answer_id": 2105548, "score": 0, "text": "<p>Both diagonals summed is 20, Blank => J = 14. </p>\n\n<p>Top and bot row, subtracted left to right.\n2(23-20)=6.\n2(J-11)=6. J = 14 again.</p>\n" }, { "answer_id": 2105282, "score": 14, "text": "<p>J makes sense for a firstgrader</p>\n\n<p>this way both diagonals are 40</p>\n\n<p>I don't think such tasks are helpful in any way.</p>\n" }, { "answer_id": 2105358, "score": 7, "text": "<p>$23 \\times 6 \\times 11 = 1518$</p>\n\n<p>$20 \\times 6 \\times \\left(12 + \\frac{13}{20}\\right) = 1518$</p>\n\n<p>Thus, the answer is $\\left(\\text{H} + \\frac{\\text{I}}{\\text{P}}\\right)$</p>\n" }, { "answer_id": 2105296, "score": 5, "text": "<p>The digits of $23$ sum to $5$, and $6 + 5 = 11$. The digits of $20$ sum to $2$, so one answer is $8 = D$. </p>\n" }, { "answer_id": 2105278, "score": 4, "text": "<p>I guess it is is $J = 14$. Indeed, the sum of the diagonal GBS is $40$. If you add $J$, then also the sum of the other diagonal (JBP) is $40$.</p>\n\n<p>Edited</p>\n" }, { "answer_id": 2105295, "score": 3, "text": "<p>For instance $23=2\\cdot 6 + 11$ so with $?=8$ we would have $8+2\\cdot 6=20$. </p>\n\n<p>And $23-11-6=6$ and $20-6-8=6$ so $?=8$ could work out this way as well.</p>\n" }, { "answer_id": 2105552, "score": 1, "text": "<p>The answer J doesn't seem as indefensible to me as the question implies. </p>\n\n<p>Plugging these numbers in makes the two diagonal sums match, as others have pointed out. However, it also makes the two horizontal differences match (J-G=3, and S-O=3) and the two vertical differences match (S-J=9, O-G=9). </p>\n\n<p>That's three patterns all satisfied by one number, which is pretty pattern-y from non-mathematician's point of view.</p>\n\n<p>I do agree that it doesn't seem to have much pedagogical value as an exercise for small children.</p>\n" }, { "answer_id": 2105548, "score": 0, "text": "<p>Both diagonals summed is 20, Blank => J = 14. </p>\n\n<p>Top and bot row, subtracted left to right.\n2(23-20)=6.\n2(J-11)=6. J = 14 again.</p>\n" }, { "answer_id": 2105282, "score": 14, "text": "<p>J makes sense for a firstgrader</p>\n\n<p>this way both diagonals are 40</p>\n\n<p>I don't think such tasks are helpful in any way.</p>\n" }, { "answer_id": 2105358, "score": 7, "text": "<p>$23 \\times 6 \\times 11 = 1518$</p>\n\n<p>$20 \\times 6 \\times \\left(12 + \\frac{13}{20}\\right) = 1518$</p>\n\n<p>Thus, the answer is $\\left(\\text{H} + \\frac{\\text{I}}{\\text{P}}\\right)$</p>\n" }, { "answer_id": 2105296, "score": 5, "text": "<p>The digits of $23$ sum to $5$, and $6 + 5 = 11$. The digits of $20$ sum to $2$, so one answer is $8 = D$. </p>\n" }, { "answer_id": 2105278, "score": 4, "text": "<p>I guess it is is $J = 14$. Indeed, the sum of the diagonal GBS is $40$. 
If you add $J$, then also the sum of the other diagonal (JBP) is $40$.</p>\n\n<p>Edited</p>\n" }, { "answer_id": 2105295, "score": 3, "text": "<p>For instance $23=2\cdot 6 + 11$ so with $?=8$ we would have $8+2\cdot 6=20$. </p>\n\n<p>And $23-11-6=6$ and $20-6-8=6$ so $?=8$ could work out this way as well.</p>\n" }, { "answer_id": 2105552, "score": 1, "text": "<p>The answer J doesn't seem as indefensible to me as the question implies. </p>\n\n<p>Plugging these numbers in makes the two diagonal sums match, as others have pointed out. However, it also makes the two horizontal differences match (J-G=3, and S-O=3) and the two vertical differences match (S-J=9, O-G=9). </p>\n\n<p>That's three patterns all satisfied by one number, which is pretty pattern-y from non-mathematician's point of view.</p>\n\n<p>I do agree that it doesn't seem to have much pedagogical value as an exercise for small children.</p>\n" }, { "answer_id": 2105548, "score": 0, "text": "<p>Both diagonals summed is 20, Blank => J = 14. </p>\n\n<p>Top and bot row, subtracted left to right.\n2(23-20)=6.\n2(J-11)=6. J = 14 again.</p>\n" } ]
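The three patterns quoted in the answers above can be checked mechanically. Below is a minimal sketch in Python; the cell values (G = 11, B = 6, S = 23, O = 20, P = 20) are inferred from the quoted sums and differences, since the original 9gag image is not part of this dump, so treat the layout as an assumption:

```python
# Check that J = 14 satisfies all three patterns quoted in the answers.
# The grid values below are inferred from the answers (GBS diagonal = 40,
# J - G = 3, S - O = 3, S - J = 9, O - G = 9); the original image is not
# available here, so they are an assumption, not the source puzzle.
G, B, S, O, P = 11, 6, 23, 20, 20
J = 14

assert G + B + S == 40            # the known diagonal
assert J + B + P == 40            # the other diagonal, with J filled in
assert J - G == 3 and S - O == 3  # horizontal differences
assert S - J == 9 and O - G == 9  # vertical differences

print("J = 14 satisfies all three quoted patterns")
```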
Question: Homework problem for a first grader on $9$gag Answer 1: <p>J makes sense for a firstgrader</p> <p>this way both diagonals are 40</p> <p>I don't think such tasks are helpful in any way.</p> Answer 2: <p>$23 \times 6 \times 11 = 1518$</p> <p>$20 \times 6 \times \left(12 + \frac{13}{20}\right) = 1518$</p> <p>Thus, the answer is $\left(\text{H} + \frac{\text{I}}{\text{P}}\right)$</p> Answer 3: <p>The digits of $23$ sum to $5$, and $6 + 5 = 11$. The digits of $20$ sum to $2$, so one answer is $8 = D$. </p> Answer 4: <p>I guess it is is $J = 14$. Indeed, the sum of the diagonal GBS is $40$. If you add $J$, then also the sum of the other diagonal (JBP) is $40$.</p> <p>Edited</p> Answer 5: <p>For instance $23=2\cdot 6 + 11$ so with $?=8$ we would have $8+2\cdot 6=20$. </p> <p>And $23-11-6=6$ and $20-6-8=6$ so $?=8$ could work out this way as well.</p> Answer 6: <p>The answer J doesn't seem as indefensible to me as the question implies. </p> <p>Plugging these numbers in makes the two diagonal sums match, as others have pointed out. However, it also makes the two horizontal differences match (J-G=3, and S-O=3) and the two vertical differences match (S-J=9, O-G=9). </p> <p>That's three patterns all satisfied by one number, which is pretty pattern-y from non-mathematician's point of view.</p> <p>I do agree that it doesn't seem to have much pedagogical value as an exercise for small children.</p> Answer 7: <p>Both diagonals summed is 20, Blank => J = 14. </p> <p>Top and bot row, subtracted left to right. 2(23-20)=6. 2(J-11)=6. J = 14 again.</p>
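For the record, the arithmetic behind Answer 2's joke does check out (reading $\text{H}=12$, $\text{I}=13$, $\text{P}=20$, as that answer implies):

$$23 \times 6 \times 11 = 1518, \qquad 20 \times 6 \times \left(12 + \tfrac{13}{20}\right) = 120 \cdot \tfrac{253}{20} = 6 \cdot 253 = 1518,$$

so filling the blank with $\text{H} + \frac{\text{I}}{\text{P}}$ equalises the two row products.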
Does $x^{-1}$ have -1 zeroes?
2,106,084
7
https://math.stackexchange.com/questions/2106084/does-x-1-have-1-zeroes
<p>I asked <a href="https://math.stackexchange.com/questions/2105588/how-can-y-x4-1-have-4-zeros">How $y=x^4+1$ could have 4 zeroes</a> yesterday, and figured it out on my own pretty quickly. I did some more thinking since then, and realized that according to the fundamental theorem of algebra, $0=x^{-1}$ has -1 zeroes (solutions?), because $-1$ is the largest exponent.</p> <p>Wouldn't it have $1$ zero though, because $\sqrt[-1]{0} = 0^{\frac{1}{-1}} = 0$? Or is it undefined? Or something else? Is there something important I'm missing, or does the question not make sense at all?</p>
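The answers that follow explain that a rational function has as many zeros as poles once the point at infinity is counted, with the order at infinity given by $\deg p - \deg q$. Here is a minimal SymPy sketch of that bookkeeping (the helper name and the choice of examples are ours, not from the thread, and `roots` is only guaranteed to work here because the examples factor explicitly):

```python
from sympy import symbols, roots, degree, fraction, cancel

x = symbols('x')

def zero_pole_count(f):
    """Count zeros and poles of a rational function f = p/q, with
    multiplicity.  The point at infinity contributes deg p - deg q
    poles (or that many zeros, if the difference is negative)."""
    p, q = fraction(cancel(f))
    zeros = sum(roots(p, x).values()) if degree(p, x) > 0 else 0
    poles = sum(roots(q, x).values()) if degree(q, x) > 0 else 0
    d = degree(p, x) - degree(q, x)   # order of the pole at infinity
    if d > 0:
        poles += d
    else:
        zeros += -d
    return zeros, poles

# 1/x: a pole of order 1 at 0 and a zero of order 1 at infinity.
print(zero_pole_count(1 / x))                            # (1, 1)

# x^{-1}(x-1)(x-2)(x-3): three finite zeros, a pole at 0 and a
# pole of order 2 at infinity -- zeros and poles balance at 3 each.
print(zero_pole_count((x - 1) * (x - 2) * (x - 3) / x))  # (3, 3)
```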
[ { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. </p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" }, { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. 
It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. </p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" }, { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! 
As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. </p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" }, { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. 
We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. </p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" }, { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. 
The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. </p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" }, { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. 
</p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. </p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" }, { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. 
</p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" }, { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. 
</p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" }, { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. 
</p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" }, { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. 
</p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" }, { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. 
</p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" }, { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. 
</p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" }, { "answer_id": 2106089, "score": 15, "text": "<p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p>\n\n<p>$x^{-1}$ is an element of the field of fractions of $\\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href=\"https://en.wikipedia.org/wiki/Laurent_polynomial\">Laurent polynomial</a>, however.</p>\n\n<p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has \"degree\" $2$ when you multiply it out, and yet it clearly has three roots.</p>\n" }, { "answer_id": 2106118, "score": 8, "text": "<p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p>\n\n<p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a \"pole\" there.</p>\n\n<p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p>\n\n<p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p>\n\n<p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p>\n\n<p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p>\n\n<p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. 
</p>\n" }, { "answer_id": 2106103, "score": 2, "text": "<p>Let us consider the roots of a generalized polynomial equation such as</p>\n\n<p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p>\n\n<p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p>\n\n<p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p>\n\n<p>which has five roots.</p>\n\n<p>For such \"polynomials\", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p>\n\n<hr>\n\n<p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>\n" } ]
Question: Does $x^{-1}$ have -1 zeroes? Answer 1: <p>The fundamental theorem of algebra deals with <em>polynomials</em> only (elements of $\mathbb C[x]$, usually), and $x^{-1}$ is not a polynomial.</p> <p>$x^{-1}$ is an element of the field of fractions of $\mathbb C[x]$, but it is not properly considered a polynomial itself. It can be called a <a href="https://en.wikipedia.org/wiki/Laurent_polynomial">Laurent polynomial</a>, however.</p> <p>Trying to use the fundamental theorem of algebra with Laurent polynomials is unproductive. For example, $x^{-1}(x-1)(x-2)(x-3)$ has "degree" $2$ when you multiply it out, and yet it clearly has three roots.</p> Answer 2: <p>This is a great question! As rschweib explained, you cannot literally apply the fundamental theorem of algebra to things which aren't polynomials. However, there is a souped-up version which applies to any ratio of two polynomials, such as $x^{-1}=1/x$. </p> <p>Say you are interested in the function $f(x)=p(x)/q(x)$, where, for simplicity, let's suppose we wrote $p$ and $q$ without common factors. The zeros of $p$ are zeros of $f$, while the zeros of $q$ are places where $f$ is singular. We say $f$ has a "pole" there.</p> <p>The formula is very simple: for any rational function, the number of zeros, counted with multiplicity, equals the number of poles, counted with multiplicity. </p> <p>If you are paying attention, though, you will see that that doesn't seem to agree with our experience, since polynomials have lots of zeros and no poles -- or so it appears!</p> <p>To make the formula work out correctly, you should also count infinity as a possible zero or pole. The order of the pole of $f$ at infinity is the degree of $p$ minus the degree of $q$. If the order is negative, then it counts as negative that many zeros. </p> <p>So $1/x$, for example, has a pole of order 1 at 0, and a pole of order -1 (i.e. a zero of order 1) at infinity. And 1=1, as predicted. </p> <p>Calling the behaviour at infinity a zero or a pole really does make pretty good sense: in the example of $1/x$, you can see that as $x$ goes to infinity, $f(x)$ really does approach zero, while for a polynomial, as $x$ goes to infinity, $f(x)$ becomes very large in absolute value, so it makes sense to think of it as a pole. </p> Answer 3: <p>Let us consider the roots of a generalized polynomial equation such as</p> <p>$$ax^3+bx^2+cx+d+ex^{-1}+fx^{-2}=0.$$</p> <p>$x=0$ does not belong to the domain, so that we can multiply by $x^2$ and get an ordinary polynomial equation:</p> <p>$$ax^5+bx^4+cx^3+dx^2+ex+f=0$$</p> <p>which has five roots.</p> <p>For such "polynomials", the number of roots is the algebraic difference of the highest and the lowest (negative) degrees ($3-(-2)=5$).</p> <hr> <p>This reasoning also works with $x^{-1}$, as the difference between the extreme degrees is $0$, showing that no root is possible ($x^{-1}=0$ is turned into $1=0$).</p>
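As a sanity check, Answer 2's count resolves the puzzle in Answer 1. For $f(x)=x^{-1}(x-1)(x-2)(x-3)=\frac{(x-1)(x-2)(x-3)}{x}$ we have $\deg p = 3$ and $\deg q = 1$, so $$\underbrace{3}_{\text{zeros at } 1,\,2,\,3} \;=\; \underbrace{1}_{\text{pole at } 0} \;+\; \underbrace{3-1}_{\text{pole at } \infty},$$ and the three visible roots are fully accounted for. Applied to $x^{-1}$ itself, the lone pole at $0$ is balanced by the lone zero at $\infty$, which is precisely why no finite root exists.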
Evaluation of $\int\frac{1}{x+ \sqrt{x^2-x+1}}dx$
2,106,937
7
https://math.stackexchange.com/questions/2106937/evaluation-of-int-frac1x-sqrtx2-x1dx
<p>Evaluate:</p> <p><span class="math-container">$$\int\frac{1}{x+ \sqrt{x^2-x+1}}dx$$</span></p> <p>After multiplying numerator and denominator by <span class="math-container">$x-\sqrt{x^2-x+1}$</span>, I get</p> <p><span class="math-container">$$x+\ln |x-1|-\int \frac{\sqrt{x^2-x+1}}{x-1}dx$$</span></p> <p>Is <span class="math-container">$\int \frac{\sqrt{x^2-x+1}}{x-1}dx$</span> expressible in terms of elementary functions?</p>
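(For reference, the rationalization step in the question checks out: $$\frac{1}{x+\sqrt{x^2-x+1}}=\frac{x-\sqrt{x^2-x+1}}{x^2-(x^2-x+1)}=\frac{x-\sqrt{x^2-x+1}}{x-1},$$ and $\int\frac{x}{x-1}\,dx=\int\left(1+\frac{1}{x-1}\right)dx=x+\ln|x-1|$, which gives the stated reduction.)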
[ { "answer_id": 2106949, "score": 4, "text": "<p>We have $$I =\\int \\frac {\\sqrt {x^2-x+1}}{x-1} dx =\\frac {1}{2 }\\int \\frac {\\sqrt {(2x-1)^2+3}}{x-1} dx $$ Substituting $u =2x-1$, we get, $$I =\\frac {1}{2} \\int \\frac {\\sqrt{u^2+3}}{u-1} du $$ Now perform hyperbolic substitution $u =\\sqrt {3} \\operatorname {sinh}(v)$ and simplifying gives us $$I =\\frac {3}{2} \\int \\frac {(\\operatorname {cosh }(v))^2}{\\sqrt {3}\\operatorname{sinh}(v) -1} dv $$ which can be solved by <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_substitution\" rel=\"nofollow noreferrer\">Weierstrass substitution</a>. Hope it helps. </p>\n" }, { "answer_id": 2106984, "score": 1, "text": "<p>There is a substitution, called the <a href=\"https://en.wikipedia.org/wiki/Euler_substitution\" rel=\"nofollow noreferrer\">Euler substitution</a>, which solves your integral rather neatly. In your case we set $t = \\sqrt{x^2 - x + 1} + x$; after subtracting $x$ from both sides and squaring, we get $x^2 -x + 1 = t^2 - 2xt + x^2$, which can be solved (after subtracting $x^2$ from both sides) to get $x = \\frac{t^2 - 1}{2t - 1}$. In this case, $\\frac{dt}{dx} = \\frac{1}{\\sqrt{x^2 - x + 1}}(2x-1) + 1$, and so </p>\n\n<p>\\begin{eqnarray} \\int \\frac{dx}{x + \\sqrt{x^2 - x + 1}} = \\int (\\frac{1}{x + \\sqrt{x^2 - x + 1}}\\cdot\\frac{1}{\\frac{dt}{dx}})\\frac{dt}{dx}dx \\\\= \\int \\frac{1}{t}\\cdot\\frac{1}{[\\frac{1}{t - (t^2-1)/(2t-1)}\\cdot(2(\\frac{t^2-1}{2t-1}) - 1)] + 1} dt = \\dots \\\\ \\dots = \\int\\frac{t^2 - t + 1}{3t^3 - 3t^2} dt\\end{eqnarray}</p>\n\n<p>which is quite easy to solve directly by partial fractions.</p>\n" }, { "answer_id": 4672775, "score": 1, "text": "<p>The modified Euler sub <span class=\"math-container\">$t=\\sqrt{x^2-x+1}-x+\\frac12$</span> is best suited here</p>\n<p><span class=\"math-container\">\\begin{align}\n&amp;\\int\\frac{1}{x+ \\sqrt{x^2-x+1}}\\ dx\\\\\n=&amp; -\\int \\frac{3+4t^2}{2t(3+2t)}\\ dt=-t-\\frac12\\ln t+2 \\ln(2t+3)+C\n\\end{align}</span></p>\n" }, { "answer_id": 2106949, "score": 4, "text": "<p>We have $$I =\\int \\frac {\\sqrt {x^2-x+1}}{x-1} dx =\\frac {1}{2 }\\int \\frac {\\sqrt {(2x-1)^2+3}}{x-1} dx $$ Substituting $u =2x-1$, we get, $$I =\\frac {1}{2} \\int \\frac {\\sqrt{u^2+3}}{u-1} du $$ Now perform hyperbolic substitution $u =\\sqrt {3} \\operatorname {sinh}(v)$ and simplifying gives us $$I =\\frac {3}{2} \\int \\frac {(\\operatorname {cosh }(v))^2}{\\sqrt {3}\\operatorname{sinh}(v) -1} dv $$ which can be solved by <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_substitution\" rel=\"nofollow noreferrer\">Weierstrass substitution</a>. Hope it helps. </p>\n" }, { "answer_id": 2106984, "score": 1, "text": "<p>There is a substitution, called the <a href=\"https://en.wikipedia.org/wiki/Euler_substitution\" rel=\"nofollow noreferrer\">Euler substitution</a>, which solves your integral rather neatly. In your case we set $t = \\sqrt{x^2 - x + 1} + x$; after subtracting $x$ from both sides and squaring, we get $x^2 -x + 1 = t^2 - 2xt + x^2$, which can be solved (after subtracting $x^2$ from both sides) to get $x = \\frac{t^2 - 1}{2t - 1}$. 
</p>\n" }, { "answer_id": 2106984, "score": 1, "text": "<p>There is a substitution, called the <a href=\"https://en.wikipedia.org/wiki/Euler_substitution\" rel=\"nofollow noreferrer\">Euler substitution</a>, which solves your integral rather neatly. In your case we set $t = \\sqrt{x^2 - x + 1} + x$; after subtracting $x$ from both sides and squaring, we get $x^2 -x + 1 = t^2 - 2xt + x^2$, which can be solved (after subtracting $x^2$ from both sides) to get $x = \\frac{t^2 - 1}{2t - 1}$. In this case, $\\frac{dt}{dx} = \\frac{1}{\\sqrt{x^2 - x + 1}}(2x-1) + 1$, and so </p>\n\n<p>\\begin{eqnarray} \\int \\frac{dx}{x + \\sqrt{x^2 - x + 1}} = \\int (\\frac{1}{x + \\sqrt{x^2 - x + 1}}\\cdot\\frac{1}{\\frac{dt}{dx}})\\frac{dt}{dx}dx \\\\= \\int \\frac{1}{t}\\cdot\\frac{1}{[\\frac{1}{t - (t^2-1)/(2t-1)}\\cdot(2(\\frac{t^2-1}{2t-1}) - 1)] + 1} dt = \\dots \\\\ \\dots = \\int\\frac{t^2 - t + 1}{3t^3 - 3t^2} dt\\end{eqnarray}</p>\n\n<p>which is quite easy to solve directly by partial fractions.</p>\n" }, { "answer_id": 4672775, "score": 1, "text": "<p>The modified Euler sub <span class=\"math-container\">$t=\\sqrt{x^2-x+1}-x+\\frac12$</span> is best suited here</p>\n<p><span class=\"math-container\">\\begin{align}\n&amp;\\int\\frac{1}{x+ \\sqrt{x^2-x+1}}\\ dx\\\\\n=&amp; -\\int \\frac{3+4t^2}{2t(3+2t)}\\ dt=-t-\\frac12\\ln t+2 \\ln(2t+3)+C\n\\end{align}</span></p>\n" }, { "answer_id": 2106949, "score": 4, "text": "<p>We have $$I =\\int \\frac {\\sqrt {x^2-x+1}}{x-1} dx =\\frac {1}{2 }\\int \\frac {\\sqrt {(2x-1)^2+3}}{x-1} dx $$ Substituting $u =2x-1$, we get, $$I =\\frac {1}{2} \\int \\frac {\\sqrt{u^2+3}}{u-1} du $$ Now perform hyperbolic substitution $u =\\sqrt {3} \\operatorname {sinh}(v)$ and simplifying gives us $$I =\\frac {3}{2} \\int \\frac {(\\operatorname {cosh }(v))^2}{\\sqrt {3}\\operatorname{sinh}(v) -1} dv $$ which can be solved by <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_substitution\" rel=\"nofollow noreferrer\">Weierstrass substitution</a>. Hope it helps. </p>\n" }, { "answer_id": 2106984, "score": 1, "text": "<p>There is a substitution, called the <a href=\"https://en.wikipedia.org/wiki/Euler_substitution\" rel=\"nofollow noreferrer\">Euler substitution</a>, which solves your integral rather neatly. In your case we set $t = \\sqrt{x^2 - x + 1} + x$; after subtracting $x$ from both sides and squaring, we get $x^2 -x + 1 = t^2 - 2xt + x^2$, which can be solved (after subtracting $x^2$ from both sides) to get $x = \\frac{t^2 - 1}{2t - 1}$. 
In this case, $\\frac{dt}{dx} = \\frac{1}{\\sqrt{x^2 - x + 1}}(2x-1) + 1$, and so </p>\n\n<p>\\begin{eqnarray} \\int \\frac{dx}{x + \\sqrt{x^2 - x + 1}} = \\int (\\frac{1}{x + \\sqrt{x^2 - x + 1}}\\cdot\\frac{1}{\\frac{dt}{dx}})\\frac{dt}{dx}dx \\\\= \\int \\frac{1}{t}\\cdot\\frac{1}{[\\frac{1}{t - (t^2-1)/(2t-1)}\\cdot(2(\\frac{t^2-1}{2t-1}) - 1)] + 1} dt = \\dots \\\\ \\dots = \\int\\frac{t^2 - t + 1}{3t^3 - 3t^2} dt\\end{eqnarray}</p>\n\n<p>which is quite easy to solve directly by partial fractions.</p>\n" }, { "answer_id": 4672775, "score": 1, "text": "<p>The modified Euler sub <span class=\"math-container\">$t=\\sqrt{x^2-x+1}-x+\\frac12$</span> is best suited here</p>\n<p><span class=\"math-container\">\\begin{align}\n&amp;\\int\\frac{1}{x+ \\sqrt{x^2-x+1}}\\ dx\\\\\n=&amp; -\\int \\frac{3+4t^2}{2t(3+2t)}\\ dt=-t-\\frac12\\ln t+2 \\ln(2t+3)+C\n\\end{align}</span></p>\n" }, { "answer_id": 2106949, "score": 4, "text": "<p>We have $$I =\\int \\frac {\\sqrt {x^2-x+1}}{x-1} dx =\\frac {1}{2 }\\int \\frac {\\sqrt {(2x-1)^2+3}}{x-1} dx $$ Substituting $u =2x-1$, we get, $$I =\\frac {1}{2} \\int \\frac {\\sqrt{u^2+3}}{u-1} du $$ Now perform hyperbolic substitution $u =\\sqrt {3} \\operatorname {sinh}(v)$ and simplifying gives us $$I =\\frac {3}{2} \\int \\frac {(\\operatorname {cosh }(v))^2}{\\sqrt {3}\\operatorname{sinh}(v) -1} dv $$ which can be solved by <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_substitution\" rel=\"nofollow noreferrer\">Weierstrass substitution</a>. Hope it helps. </p>\n" }, { "answer_id": 2106984, "score": 1, "text": "<p>There is a substitution, called the <a href=\"https://en.wikipedia.org/wiki/Euler_substitution\" rel=\"nofollow noreferrer\">Euler substitution</a>, which solves your integral rather neatly. In your case we set $t = \\sqrt{x^2 - x + 1} + x$; after subtracting $x$ from both sides and squaring, we get $x^2 -x + 1 = t^2 - 2xt + x^2$, which can be solved (after subtracting $x^2$ from both sides) to get $x = \\frac{t^2 - 1}{2t - 1}$. In this case, $\\frac{dt}{dx} = \\frac{1}{\\sqrt{x^2 - x + 1}}(2x-1) + 1$, and so </p>\n\n<p>\\begin{eqnarray} \\int \\frac{dx}{x + \\sqrt{x^2 - x + 1}} = \\int (\\frac{1}{x + \\sqrt{x^2 - x + 1}}\\cdot\\frac{1}{\\frac{dt}{dx}})\\frac{dt}{dx}dx \\\\= \\int \\frac{1}{t}\\cdot\\frac{1}{[\\frac{1}{t - (t^2-1)/(2t-1)}\\cdot(2(\\frac{t^2-1}{2t-1}) - 1)] + 1} dt = \\dots \\\\ \\dots = \\int\\frac{t^2 - t + 1}{3t^3 - 3t^2} dt\\end{eqnarray}</p>\n\n<p>which is quite easy to solve directly by partial fractions.</p>\n" }, { "answer_id": 4672775, "score": 1, "text": "<p>The modified Euler sub <span class=\"math-container\">$t=\\sqrt{x^2-x+1}-x+\\frac12$</span> is best suited here</p>\n<p><span class=\"math-container\">\\begin{align}\n&amp;\\int\\frac{1}{x+ \\sqrt{x^2-x+1}}\\ dx\\\\\n=&amp; -\\int \\frac{3+4t^2}{2t(3+2t)}\\ dt=-t-\\frac12\\ln t+2 \\ln(2t+3)+C\n\\end{align}</span></p>\n" }, { "answer_id": 2106949, "score": 4, "text": "<p>We have $$I =\\int \\frac {\\sqrt {x^2-x+1}}{x-1} dx =\\frac {1}{2 }\\int \\frac {\\sqrt {(2x-1)^2+3}}{x-1} dx $$ Substituting $u =2x-1$, we get, $$I =\\frac {1}{2} \\int \\frac {\\sqrt{u^2+3}}{u-1} du $$ Now perform hyperbolic substitution $u =\\sqrt {3} \\operatorname {sinh}(v)$ and simplifying gives us $$I =\\frac {3}{2} \\int \\frac {(\\operatorname {cosh }(v))^2}{\\sqrt {3}\\operatorname{sinh}(v) -1} dv $$ which can be solved by <a href=\"https://en.wikipedia.org/wiki/Tangent_half-angle_substitution\" rel=\"nofollow noreferrer\">Weierstrass substitution</a>. Hope it helps. 
</p>\n" }, { "answer_id": 2106984, "score": 1, "text": "<p>There is a substitution, called the <a href=\"https://en.wikipedia.org/wiki/Euler_substitution\" rel=\"nofollow noreferrer\">Euler substitution</a>, which solves your integral rather neatly. In your case we set $t = \\sqrt{x^2 - x + 1} + x$; after subtracting $x$ from both sides and squaring, we get $x^2 -x + 1 = t^2 - 2xt + x^2$, which can be solved (after subtracting $x^2$ from both sides) to get $x = \\frac{t^2 - 1}{2t - 1}$. In this case, $\\frac{dt}{dx} = \\frac{1}{\\sqrt{x^2 - x + 1}}(2x-1) + 1$, and so </p>\n\n<p>\\begin{eqnarray} \\int \\frac{dx}{x + \\sqrt{x^2 - x + 1}} = \\int (\\frac{1}{x + \\sqrt{x^2 - x + 1}}\\cdot\\frac{1}{\\frac{dt}{dx}})\\frac{dt}{dx}dx \\\\= \\int \\frac{1}{t}\\cdot\\frac{1}{[\\frac{1}{t - (t^2-1)/(2t-1)}\\cdot(2(\\frac{t^2-1}{2t-1}) - 1)] + 1} dt = \\dots \\\\ \\dots = \\int\\frac{t^2 - t + 1}{3t^3 - 3t^2} dt\\end{eqnarray}</p>\n\n<p>which is quite easy to solve directly by partial fractions.</p>\n" }, { "answer_id": 4672775, "score": 1, "text": "<p>The modified Euler sub <span class=\"math-container\">$t=\\sqrt{x^2-x+1}-x+\\frac12$</span> is best suited here</p>\n<p><span class=\"math-container\">\\begin{align}\n&amp;\\int\\frac{1}{x+ \\sqrt{x^2-x+1}}\\ dx\\\\\n=&amp; -\\int \\frac{3+4t^2}{2t(3+2t)}\\ dt=-t-\\frac12\\ln t+2 \\ln(2t+3)+C\n\\end{align}</span></p>\n" } ]
Question: Evaluation of $\int\frac{1}{x+ \sqrt{x^2-x+1}}dx$ Answer 1: <p>Rationalizing gives $\frac{1}{x+\sqrt{x^2-x+1}}=\frac{x-\sqrt{x^2-x+1}}{x-1}$, and $\int \frac{x}{x-1}\, dx = x+\ln|x-1|+C$ is elementary, so it remains to evaluate $$I =\int \frac {\sqrt {x^2-x+1}}{x-1} dx =\frac {1}{2 }\int \frac {\sqrt {(2x-1)^2+3}}{x-1} dx $$ Substituting $u =2x-1$, we get, $$I =\frac {1}{2} \int \frac {\sqrt{u^2+3}}{u-1} du $$ Now perform hyperbolic substitution $u =\sqrt {3} \operatorname {sinh}(v)$ and simplifying gives us $$I =\frac {3}{2} \int \frac {(\operatorname {cosh }(v))^2}{\sqrt {3}\operatorname{sinh}(v) -1} dv $$ which can be solved by <a href="https://en.wikipedia.org/wiki/Tangent_half-angle_substitution" rel="nofollow noreferrer">Weierstrass substitution</a>. Hope it helps. </p> Answer 2: <p>There is a substitution, called the <a href="https://en.wikipedia.org/wiki/Euler_substitution" rel="nofollow noreferrer">Euler substitution</a>, which solves your integral rather neatly. In your case we set $t = \sqrt{x^2 - x + 1} + x$; after subtracting $x$ from both sides and squaring, we get $x^2 - x + 1 = t^2 - 2xt + x^2$, which can be solved (after subtracting $x^2$ from both sides) to get $x = \frac{t^2 - 1}{2t - 1}$. In this case, $\frac{dt}{dx} = \frac{2x - 1}{2\sqrt{x^2 - x + 1}} + 1$; writing $\sqrt{x^2 - x + 1} = t - x = \frac{t^2 - t + 1}{2t - 1}$ and $2x - 1 = \frac{2t^2 - 2t - 1}{2t - 1}$, this simplifies to $\frac{dt}{dx} = \frac{(2t - 1)^2}{2(t^2 - t + 1)}$, and so </p> <p>\begin{eqnarray} \int \frac{dx}{x + \sqrt{x^2 - x + 1}} = \int \frac{1}{t}\cdot\frac{dx}{dt}\, dt = \int\frac{2(t^2 - t + 1)}{t(2t - 1)^2}\, dt\end{eqnarray}</p> <p>which is quite easy to solve directly by partial fractions.</p> Answer 3: <p>The modified Euler sub <span class="math-container">$t=\sqrt{x^2-x+1}-x+\frac12$</span> is best suited here</p> <p><span class="math-container">\begin{align} &amp;\int\frac{1}{x+ \sqrt{x^2-x+1}}\ dx\\ =&amp; -\int \frac{3+4t^2}{2t(3+2t)}\ dt = -\int\left(1+\frac{1}{2t}-\frac{4}{2t+3}\right)dt\\ =&amp; -t-\frac12\ln t+2 \ln(2t+3)+C \end{align}</span></p>
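A differentiation check of the closed form in Answer 3, again as a hedged SymPy sketch of ours (not part of the original answers); note $t(x)>0$ for all real $x$, so the logarithm is safe:

```python
# Sketch (ours): differentiate the Answer-3 antiderivative, written as a
# function of x via t(x) = sqrt(x^2 - x + 1) - x + 1/2, and compare with
# the original integrand 1/(x + sqrt(x^2 - x + 1)).
import sympy as sp

x = sp.symbols('x', positive=True)
t = sp.sqrt(x**2 - x + 1) - x + sp.Rational(1, 2)   # modified Euler sub
F = -t - sp.log(t)/2 + 2*sp.log(2*t + 3)            # claimed antiderivative

residual = sp.diff(F, x) - 1/(x + sp.sqrt(x**2 - x + 1))
print(sp.simplify(residual))                 # expected: 0
for x0 in (sp.Rational(1, 3), 2, 10):        # numeric spot checks, ~0 each
    print(sp.N(residual.subs(x, x0)))
```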
Is the interior of a smooth manifold with boundary connected?
2,107,145
7
https://math.stackexchange.com/questions/2107145/is-the-interior-of-a-smooth-manifold-with-boundary-connected
<p>Let $M$ be a smooth connected manifold with boundary. Is it true that the interior of $M$ is also connected? (As usual I am assuming $M$ is second-countable and Hausdorff).</p> <p>I am trying to rule out something like two "tangent disks" (or circles) where the (topological) interior is obviously not connected. This case is not a counter-example since the union of two tangent disks (or circles) is not a manifold with boundary (the point of tangency is pathological).</p>
[ { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" }, { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" }, { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" }, { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. 
But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" }, { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" }, { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" }, { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" }, { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. 
But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" }, { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" }, { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" }, { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" }, { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. 
But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" }, { "answer_id": 2107228, "score": 7, "text": "<p>This is true because a manifold is homotopically equivalent to its interior (this follows from the collar neighborhood theorem)</p>\n" }, { "answer_id": 2107934, "score": 7, "text": "<p>Assume you have two connected components $M_1$ and $M_2$ of $int M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\\mathbb{H}^n\\to U\\subset M$ be a coordinate chart around $p$, where $\\mathbb{H}^n$ is the upper half of $\\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\\mathbb{R}^n$, and whose boundary is a subset of the boundary of $\\mathbb{H}^n$. But then $X^{-1}(M_i)$ must be the whole $\\mathbb{H}^n\\setminus \\partial\\mathbb{H}^n$, which contradict the disjointness of $M_1$ and $M_2$.</p>\n" } ]
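The last step of the second answer is a standard clopen argument; here it is isolated as a LaTeX sketch (our paraphrase, assuming an amsthm-style `lemma` environment; not part of the original answers):

```latex
% Our paraphrase of the key step; assumes \usepackage{amsthm} with a
% 'lemma' environment declared.
\begin{lemma}
Let $C \subseteq \mathbb{H}^n \setminus \partial\mathbb{H}^n$ be nonempty and
open, with topological boundary $\partial C \subseteq \partial\mathbb{H}^n$.
Then $C = \mathbb{H}^n \setminus \partial\mathbb{H}^n$.
\end{lemma}

\begin{proof}[Proof sketch]
$C$ is open by hypothesis, and it is closed in
$\mathbb{H}^n \setminus \partial\mathbb{H}^n$ since
$\overline{C} \cap (\mathbb{H}^n \setminus \partial\mathbb{H}^n)
 = (C \cup \partial C) \cap (\mathbb{H}^n \setminus \partial\mathbb{H}^n) = C$.
A nonempty clopen subset of the connected set
$\mathbb{H}^n \setminus \partial\mathbb{H}^n$ is the whole set.
\end{proof}
```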
Question: Is the interior of a smooth manifold with boundary connected? Answer 1: <p>This is true because a manifold is homotopy equivalent to its interior (this follows from the collar neighborhood theorem)</p> Answer 2: <p>Assume you have two connected components $M_1$ and $M_2$ of $\operatorname{int} M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\mathbb{H}^n\to U\subset M$ be a coordinate chart around $p$, where $\mathbb{H}^n$ is the upper half of $\mathbb{R}^n$ (including boundary). Then each $X^{-1}(M_i)$ is a nonempty open subset of $\mathbb{H}^n$ (nonempty because $p$ lies in the closure of each $M_i$), whose boundary is a subset of the boundary of $\mathbb{H}^n$. In particular $X^{-1}(M_i)$ is both open and closed in the connected set $\mathbb{H}^n\setminus \partial\mathbb{H}^n$, so it must be the whole $\mathbb{H}^n\setminus \partial\mathbb{H}^n$, which contradicts the disjointness of $M_1$ and $M_2$.</p>
Understanding the limit in $\mathbb{Z_p}$
2,107,186
7
https://math.stackexchange.com/questions/2107186/understanding-the-limit-in-mathbbz-p
<p>Let $A_n = \mathbb{Z}/p^n \mathbb{Z}$. We define the ring of p-adic integers $\mathbb{Z_p}$ as $$\mathbb{Z_p} := \lim_{\longleftarrow} (A_n , \phi_n)$$ where $\phi_n : A_n \rightarrow A_{n-1}$ is a homomorphism. </p> <p>It's called the <strong>projective limit</strong> of the system $(A_n , \phi_n)$. Any element of $\mathbb{Z_p}$ is a sequence $x = (\dots, x_n, \dots, x_1)$ with $x_n \in A_n$ and $\phi_n(x_n) = x_{n-1}$ for $n &gt; 1$. </p> <p>I do not understand what kind of limit is defined for $\mathbb{Z_p}$. Can someone please explain this notation to me?</p>
[ { "answer_id": 2107218, "score": 6, "text": "<p>I'll examine in some detail what it means from an example.</p>\n\n<p>An element in <span class=\"math-container\">$\\mathbf Z_p$</span> is formally a sequence <span class=\"math-container\">$(x_1,x_2,\\dots,x_n,\\dots)$</span>, where each <span class=\"math-container\">$x_n\\in\\mathbf Z/p^n\\mathbf Z$</span>, satisfying the condition that the canonical image of <span class=\"math-container\">$x_n\\in \\mathbf Z/p^n\\mathbf Z$</span> in <span class=\"math-container\">$\\mathbf Z/p^{n-1}\\mathbf Z$</span> is <span class=\"math-container\">$x_{n-1}$</span>.</p>\n\n<p>First consider <span class=\"math-container\">$x_1\\in\\mathbf Z/p\\mathbf Z$</span>: this congruence class is represented by an integer <span class=\"math-container\">$a_0\\in\\{0,1,\\dots,p-1\\}$</span>. </p>\n\n<p>Next <span class=\"math-container\">$x_2\\in\\mathbf Z/p^2\\mathbf Z$</span>: it is represented by an integer in <span class=\"math-container\">$\\{0,\\dots,p-1,p,\\dots,p^2-1\\}$</span>, which we can write, by Euclidean division, <span class=\"math-container\">$y_2p+r_2, \\enspace 0\\le r_2&lt;p-1$</span>. As <span class=\"math-container\">$\\phi_2(x_2)=x_1$</span>, this implies <span class=\"math-container\">$r_2=a_0$</span>, and any two values of <span class=\"math-container\">$y_2p$</span> are congruent modulo <span class=\"math-container\">$p^2$</span>, hence any two values of <span class=\"math-container\">$y_2$</span> are congruent modulo <span class=\"math-container\">$p$</span>. Hence <span class=\"math-container\">$y_2$</span> has a unique representative <span class=\"math-container\">$a_1$</span> in <span class=\"math-container\">$\\{0,\\dots, p-1\\}$</span>, and <span class=\"math-container\">$x_2$</span> is represented by the integer:\n<span class=\"math-container\">$$a_0+a_1p,\\quad 0\\le a_0,a_1\\le p-1.$$</span></p>\n\n<p>More generally, an easy induction shows that is <span class=\"math-container\">$x_n\\in\\mathbf Z/p^n\\mathbf Z$</span> represented by the integer\n<span class=\"math-container\">$$\\sum_{i=0}^{n-1}a_ip^i,\\quad 0\\le a_0,a_1,\\dots, a_{i-1}\\le p-1,$$</span>\nand the <span class=\"math-container\">$p$</span>-adic integer <span class=\"math-container\">$x$</span> is represented the infinite series:\n<span class=\"math-container\">$$x=\\sum_{i=0}^{\\infty}a_ip^i,\\quad \\forall i,\\;0\\le a_i\\le p-1.$$</span>\nThis is somewhat like the infinite decimal expansion of a real number, but in the other direction. Of course, it is not convergent for the usual topology induced by <span class=\"math-container\">$\\mathbf R$</span>, but for the metric deduced from the <span class=\"math-container\">$p$</span>-adic valuation.</p>\n" }, { "answer_id": 2107207, "score": 2, "text": "<p>This is a particular (and concrete) example or <em>Inverse Limit of Rings</em> (Bourbaki Algebra Ch 1 § 10 nb. 1). Take an inverse system of rings that is the data of </p>\n\n<ul>\n<li> A preordered indexing set <span class=\"math-container\">$(I,\\leq)$</span> right directed i.e. 
\n<span class=\"math-container\">$$\n(\forall \alpha,\beta\in I)(\exists \gamma\in I)(\alpha\leq \gamma\mbox{ and }\beta\leq \gamma)\n$$</span>\n<li> Families <span class=\"math-container\">$(R_\alpha)_{\alpha\in I}$</span> and <span class=\"math-container\">$(f_{\alpha,\beta})_{\alpha,\beta\in I\atop \alpha\leq \beta}$</span>, where the <span class=\"math-container\">$R_\alpha$</span> are rings (in your case <span class=\"math-container\">$A_n$</span>) and <span class=\"math-container\">$f_{\alpha,\beta}\in \operatorname{Hom}_\text{ring}(R_\beta,R_\alpha)$</span>\n</ul>\n\n<p>The limit is the set of families (in your case sequences) <span class=\"math-container\">$(x_\alpha)_{\alpha\in I}$</span> such that, for all <span class=\"math-container\">$\alpha\leq \beta$</span>, one has <span class=\"math-container\">$f_{\alpha,\beta}(x_\beta)=x_\alpha$</span>.<br>\nNote that, in your case, <span class=\"math-container\">$\phi_n$</span> is not arbitrary: it is the canonical reduction morphism.</p>\n" } ]
Question: Understanding the limit in $\mathbb{Z_p}$ Answer 1: <p>I'll examine in some detail what it means from an example.</p> <p>An element in <span class="math-container">$\mathbf Z_p$</span> is formally a sequence <span class="math-container">$(x_1,x_2,\dots,x_n,\dots)$</span>, where each <span class="math-container">$x_n\in\mathbf Z/p^n\mathbf Z$</span>, satisfying the condition that the canonical image of <span class="math-container">$x_n\in \mathbf Z/p^n\mathbf Z$</span> in <span class="math-container">$\mathbf Z/p^{n-1}\mathbf Z$</span> is <span class="math-container">$x_{n-1}$</span>.</p> <p>First consider <span class="math-container">$x_1\in\mathbf Z/p\mathbf Z$</span>: this congruence class is represented by an integer <span class="math-container">$a_0\in\{0,1,\dots,p-1\}$</span>.</p> <p>Next <span class="math-container">$x_2\in\mathbf Z/p^2\mathbf Z$</span>: it is represented by an integer in <span class="math-container">$\{0,\dots,p-1,p,\dots,p^2-1\}$</span>, which we can write, by Euclidean division, as <span class="math-container">$y_2p+r_2, \enspace 0\le r_2\le p-1$</span>. As <span class="math-container">$\phi_2(x_2)=x_1$</span>, this implies <span class="math-container">$r_2=a_0$</span>; moreover, any two possible values of <span class="math-container">$y_2p$</span> are congruent modulo <span class="math-container">$p^2$</span>, hence any two possible values of <span class="math-container">$y_2$</span> are congruent modulo <span class="math-container">$p$</span>. Hence <span class="math-container">$y_2$</span> has a unique representative <span class="math-container">$a_1$</span> in <span class="math-container">$\{0,\dots, p-1\}$</span>, and <span class="math-container">$x_2$</span> is represented by the integer: <span class="math-container">$$a_0+a_1p,\quad 0\le a_0,a_1\le p-1.$$</span></p> <p>More generally, an easy induction shows that <span class="math-container">$x_n\in\mathbf Z/p^n\mathbf Z$</span> is represented by the integer <span class="math-container">$$\sum_{i=0}^{n-1}a_ip^i,\quad 0\le a_0,a_1,\dots, a_{n-1}\le p-1,$$</span> and the <span class="math-container">$p$</span>-adic integer <span class="math-container">$x$</span> is represented by the infinite series: <span class="math-container">$$x=\sum_{i=0}^{\infty}a_ip^i,\quad \forall i,\;0\le a_i\le p-1.$$</span> This is somewhat like the infinite decimal expansion of a real number, but in the other direction. Of course, it is not convergent for the usual topology induced by <span class="math-container">$\mathbf R$</span>, but it does converge for the metric deduced from the <span class="math-container">$p$</span>-adic valuation.</p>
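The digit expansion in Answer 1 is easy to experiment with. Here is a minimal Python sketch (the helper name `p_adic_digits` is purely illustrative, not from the answer): it extracts the first $n$ base-$p$ digits of an ordinary integer $x$, so that $x \equiv \sum_{i<n} a_i p^i \pmod{p^n}$; it handles negative $x$ as well, because Python's `divmod` always returns a remainder in $[0, p)$.

```python
def p_adic_digits(x, p, n):
    """First n base-p digits a_0, ..., a_{n-1} of the integer x,
    chosen so that x == sum(a_i * p**i) modulo p**n."""
    digits = []
    for _ in range(n):
        x, r = divmod(x, p)  # floor division: remainder r satisfies 0 <= r < p
        digits.append(r)
    return digits

# In Z_5, the expansion of -1 is 4 + 4*5 + 4*5**2 + ... (every digit is p-1).
print(p_adic_digits(-1, 5, 6))   # [4, 4, 4, 4, 4, 4]

# Sanity check of the defining congruence for x = 2017, p = 3, n = 7.
x, p, n = 2017, 3, 7
assert sum(a * p**i for i, a in enumerate(p_adic_digits(x, p, n))) % p**n == x % p**n
```

Truncating the digit list after $n$ terms recovers exactly the component $x_n \in \mathbf Z/p^n\mathbf Z$ of the coherent sequence, which is why the digits of $x_{n-1}$ are a prefix of the digits of $x_n$.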
<span class="math-container">$$ (\forall \alpha,\beta\in I)(\exists \gamma\in I)(\alpha\leq \gamma\mbox{ and }\beta\leq \gamma) $$</span> <li> Families <span class="math-container">$(R_\alpha)_{\alpha\in I}$</span> <span class="math-container">$(f_{\alpha,\beta})_{\alpha,\beta\in I\atop \alpha\leq \beta}$</span> where <span class="math-container">$R_\alpha$</span> are rings (in your case <span class="math-container">$A_n$</span>) and <span class="math-container">$f_{\alpha,\beta}\in \operatorname{Hom}_\text{ring}(R_\beta,R_\alpha)$</span> </ul> <p>The limit is the set of families (in your case sequences) <span class="math-container">$(x_\alpha)_{\alpha\in I}$</span> such that, for all <span class="math-container">$\alpha\leq \beta$</span> one has <span class="math-container">$f_{\alpha,\beta}(x_\beta)=x_\alpha$</span>.<br> Note that, in your case, <span class="math-container">$\phi_n$</span> is not arbitrary. It is the canonical morphism. </p> Answer 3: <p>I'll examine in some detail what it means from an example.</p> <p>An element in <span class="math-container">$\mathbf Z_p$</span> is formally a sequence <span class="math-container">$(x_1,x_2,\dots,x_n,\dots)$</span>, where each <span class="math-container">$x_n\in\mathbf Z/p^n\mathbf Z$</span>, satisfying the condition that the canonical image of <span class="math-container">$x_n\in \mathbf Z/p^n\mathbf Z$</span> in <span class="math-container">$\mathbf Z/p^{n-1}\mathbf Z$</span> is <span class="math-container">$x_{n-1}$</span>.</p> <p>First consider <span class="math-container">$x_1\in\mathbf Z/p\mathbf Z$</span>: this congruence class is represented by an integer <span class="math-container">$a_0\in\{0,1,\dots,p-1\}$</span>. </p> <p>Next <span class="math-container">$x_2\in\mathbf Z/p^2\mathbf Z$</span>: it is represented by an integer in <span class="math-container">$\{0,\dots,p-1,p,\dots,p^2-1\}$</span>, which we can write, by Euclidean division, <span class="math-container">$y_2p+r_2, \enspace 0\le r_2&lt;p-1$</span>. As <span class="math-container">$\phi_2(x_2)=x_1$</span>, this implies <span class="math-container">$r_2=a_0$</span>, and any two values of <span class="math-container">$y_2p$</span> are congruent modulo <span class="math-container">$p^2$</span>, hence any two values of <span class="math-container">$y_2$</span> are congruent modulo <span class="math-container">$p$</span>. Hence <span class="math-container">$y_2$</span> has a unique representative <span class="math-container">$a_1$</span> in <span class="math-container">$\{0,\dots, p-1\}$</span>, and <span class="math-container">$x_2$</span> is represented by the integer: <span class="math-container">$$a_0+a_1p,\quad 0\le a_0,a_1\le p-1.$$</span></p> <p>More generally, an easy induction shows that is <span class="math-container">$x_n\in\mathbf Z/p^n\mathbf Z$</span> represented by the integer <span class="math-container">$$\sum_{i=0}^{n-1}a_ip^i,\quad 0\le a_0,a_1,\dots, a_{i-1}\le p-1,$$</span> and the <span class="math-container">$p$</span>-adic integer <span class="math-container">$x$</span> is represented the infinite series: <span class="math-container">$$x=\sum_{i=0}^{\infty}a_ip^i,\quad \forall i,\;0\le a_i\le p-1.$$</span> This is somewhat like the infinite decimal expansion of a real number, but in the other direction. 
What do the Hurwitz quaternions have to do with the Hurwitz quaternion order?
2,108,071
7
https://math.stackexchange.com/questions/2108071/what-do-the-hurwitz-quaternions-have-to-do-with-the-hurwitz-quaternion-order
<p>The <strong>Hurwitz quaternions</strong> are the ring formed by the elements of the form $w+xi+yj+zij$ where $i^2=j^2=-1$, $ij=-ji$, and where $w,x,y,z$ are either all integers or all half-integers. These form a maximal order of the quaternion algebra $\Big(\frac{-1,-1}{\mathbb{Q}}\Big)$.</p> <p>The <strong>Hurwitz quaternion order</strong>, on the other hand, is defined as follows (according to Wikipedia). Let $\rho$ be a primitive seventh root of unity and let $K$ be the maximal real subfield of $\mathbb{Q}(\rho)$. Let $\eta=2\cos(\frac{2\pi}{7})$ (so that $\mathbb{Z}[\eta]$ is the ring of integers of $K$) and consider the quaternion algebra $\Big(\frac{\eta,\eta}{K}\Big)$ (where $i^2=j^2=\eta$). Then let $\tau=1+\eta+\eta^2$ and $j'=\frac{1+\eta i+\tau j}{2}$, and the Hurwitz quaternion order is the maximal order $\mathbb{Z}[\eta][i,j,j']$ in $\Big(\frac{\eta,\eta}{K}\Big)$.</p> <p>It seems that the Hurwitz quaternion order should be some sort of generalization of the Hurwitz quaternions, but there are a lot of decisions here that seem arbitrary to me. What is the motivation for the similar nomenclature? What is special about the order $\mathbb{Z}[\eta][i,j,j']$ in $\Big(\frac{\eta,\eta}{K}\Big)$ and what does it have in common with the Hurwitz quaternions in $\Big(\frac{-1,-1}{\mathbb{Q}}\Big)$?</p>
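One aspect of the definition can at least be sanity-checked numerically. For $x=a+bi+cj+d\,ij$ in $\big(\frac{\eta,\eta}{K}\big)$ the standard reduced trace and norm are $\operatorname{trd}(x)=2a$ and $\operatorname{nrd}(x)=a^2-\eta b^2-\eta c^2+\eta^2 d^2$, and one can check that $j'$ has $\operatorname{trd}(j')=1$ and $\operatorname{nrd}(j')=-1-3\eta$, both in $\mathbb{Z}[\eta]$, so $j'$ is integral over $\mathbb{Z}[\eta]$. The sketch below is editorial (not from the question or its answers) and uses floating point rather than exact arithmetic in $\mathbb{Z}[\eta]$:

```python
# Editorial numeric check; an exact check would compute in Z[eta].
from math import cos, pi, isclose

eta = 2 * cos(2 * pi / 7)   # generator of the ring of integers of K
tau = 1 + eta + eta**2

# j' = (1 + eta*i + tau*j)/2, written as a + b*i + c*j + d*ij:
a, b, c, d = 0.5, eta / 2, tau / 2, 0.0

trd = 2 * a                                           # reduced trace
nrd = a**2 - eta * b**2 - eta * c**2 + eta**2 * d**2  # reduced norm

print(isclose(trd, 1.0))           # True: trd(j') = 1
print(isclose(nrd, -1 - 3 * eta))  # True: nrd(j') = -1 - 3*eta, in Z[eta]
```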
[ { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. </p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" }, { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. </p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" }, { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. 
</p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" }, { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. </p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" }, { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. </p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). 
The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" }, { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. </p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" }, { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. </p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" }, { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. 
The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. </p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" }, { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. </p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" }, { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. </p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). 
The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" }, { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. </p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" }, { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. </p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" }, { "answer_id": 2108078, "score": 3, "text": "<p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\\mathbb H$. Go figure.</p>\n" }, { "answer_id": 2109038, "score": 3, "text": "<p>What they have to do is they are both related to things studied by Hurwitz. 
The \"Hurwitz quaternions\" are an object typically considered in number theory (and as a number theorist, if I hear \"Hurwitz order\" that's what I'll think of), and were directly studied by Hurwitz. </p>\n\n<p>The other \"Hurwitz order\" comes up in geometry because it is related to <a href=\"https://en.wikipedia.org/wiki/Hurwitz_surface\" rel=\"nofollow noreferrer\">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). The latter group is essentially the unit group in the other \"Hurwitz order,\" hence the name, though as far as I know was not actually studied by Hurwitz.</p>\n" } ]
Question: What do the Hurwitz quaternions have to do with the Hurwitz quaternion order? Answer 1: <p>It appears that the term does not refer to a generalization of the relationship of Hamilton's quaternions to the Hurwitz quaternions (as one might expect) but rather the term as defined there is just a specific order in a specific quaternion algebra other than $\mathbb H$. Go figure.</p> Answer 2: <p>What they have to do is they are both related to things studied by Hurwitz. The "Hurwitz quaternions" are an object typically considered in number theory (and as a number theorist, if I hear "Hurwitz order" that's what I'll think of), and were directly studied by Hurwitz. </p> <p>The other "Hurwitz order" comes up in geometry because it is related to <a href="https://en.wikipedia.org/wiki/Hurwitz_surface" rel="nofollow noreferrer">Hurwitz surfaces</a>, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. All three have automorphism groups of the form PSL(2,$q$), with $q = 7$, $8$ and $13$ respectively, and each is a quotient of the (2,3,7) triangle group. The latter group is essentially the unit group in the other "Hurwitz order," hence the name, though as far as I know it was not actually studied by Hurwitz.</p>
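As a numerical cross-check on Answer 2 (again an editorial sketch, not from the thread): the automorphism groups named there have orders given by the standard formula $|\mathrm{PSL}(2,q)| = q(q^2-1)/\gcd(2,q-1)$ for a prime power $q$, and these match the Hurwitz bound $84(g-1)$ exactly.

```python
from math import gcd

def psl2_order(q):
    # |PSL(2, q)| = q (q^2 - 1) / gcd(2, q - 1) for a prime power q
    return q * (q * q - 1) // gcd(2, q - 1)

# genus 3: Klein quartic, PSL(2,7); genus 7: Macbeath surface, PSL(2,8);
# genus 14: the first Hurwitz triplet, PSL(2,13)
for g, q in [(3, 7), (7, 8), (14, 13)]:
    print(g, 84 * (g - 1), psl2_order(q))  # the last two columns agree
```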
Is it possible that the formula for the $n^{th}$ prime number is a polynomial in $n$?
2,108,275
7
https://math.stackexchange.com/questions/2108275/is-it-possible-that-the-formula-for-the-nthprime-number-is-a-polynomial-in
<p>I know that no pattern has been found yet. And prime numbers are weird, so the formula being a polynomial would be too simple to be true. Has a proof been given that the expression for the $n^{th}$ prime number can't be a polynomial in $n$?</p> <p>I also found this claim on the internet: $\frac{\sin^2\frac{\pi(j-1)!^2}{j}}{\sin^2\frac{\pi}{j}}$ is equal to one if and only if $j$ is prime. One thing that I got from simplifying this is that $\frac{(j-1)!^4-1}{j}$ is an integer if and only if $j$ is prime. So, if there's some equation which is satisfied only by integers, then it will be satisfied by $\frac{(j-1)!^4-1}{j}$ exactly when $j$ is prime. Is there some equation involving continuous functions which is satisfied only by integers? I couldn't find any equation like that. It should be in terms of standard continuous functions and shouldn't involve discontinuous functions like the greatest integer function or the smallest integer function. And it shouldn't be like $n \% 1 = 0$ only if $n$ is an integer. That won't help. And also not anything like $\sin(n\pi)=0$ only if $n$ is an integer.</p>
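The criterion in the question is easy to probe empirically. A short Python sketch (an editorial addition, assuming nothing beyond the formula quoted above) that compares the integrality of $((j-1)!^4-1)/j$ with trial-division primality:

```python
from math import factorial

def is_prime(n):
    # plain trial division, enough for a small empirical check
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for j in range(2, 60):
    wilson_style = (factorial(j - 1) ** 4 - 1) % j == 0
    assert wilson_style == is_prime(j), j
print("criterion matches trial division for 2 <= j < 60")
```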
[ { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. \nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. 
But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" }, { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. \nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. 
So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" }, { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. \nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. 
Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" }, { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. \nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. 
This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" }, { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. 
\nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" }, { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. 
<a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. \nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" }, { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. 
And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. \nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. 
But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" }, { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. \nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. 
So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" }, { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. \nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. 
Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" }, { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. \nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. 
This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" }, { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. 
\nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" }, { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. 
<a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. \nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" }, { "answer_id": 2108283, "score": 10, "text": "<p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p>\n\n<p>Clearly there is no <em>first-degree</em> polynomial with this property. 
And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href=\"https://en.wikipedia.org/wiki/Prime_number_theorem\">prime number theorem</a>.</p>\n\n<hr>\n\n<p>By the way, you don't need to raise to the 4th power in your primality test; <a href=\"https://en.wikipedia.org/wiki/Wilson&#39;s_theorem\">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p>\n" }, { "answer_id": 2108314, "score": 9, "text": "<p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p>\n\n<p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href=\"https://en.wikipedia.org/wiki/Integer-valued_polynomial\">Integer-valued polynomials</a>).</p>\n\n<p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is\ncoprime to the denominators of all coefficients of $x$. \nThen since $(x+p)^k \\equiv x^k \\mod p$ for all nonnegative integers $k$, we get that $P(x+p) \\equiv P(x) \\equiv 0\\mod p$, so $P(x+p)$ can't be prime.</p>\n" }, { "answer_id": 2108309, "score": 6, "text": "<p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p>\n\n<p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\\geq2$. This is key because <a href=\"https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes\" rel=\"nofollow noreferrer\">we know that</a>$$\\sum_p\\frac{1}{p}=+\\infty,$$\nwhile if $\\deg q\\geq2$ then we must have $$\\sum_n\\frac{1}{q(n)}\\leq C\\sum_n\\frac{1}{n^2}&lt;+\\infty$$\nfor some constant $C$.</p>\n" }, { "answer_id": 2108300, "score": 4, "text": "<p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\\lim_{x\\rightarrow\\infty}f'(x)=\\infty$, which means that we have:</p>\n\n<blockquote>\n <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p>\n</blockquote>\n\n<p><a href=\"https://en.wikipedia.org/wiki/Yitang_Zhang#Research\" rel=\"nofollow noreferrer\">But this is known to be false.</a></p>\n\n<hr>\n\n<p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\\ge 3$ contained in the primes. But this contradicts <a href=\"https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem\" rel=\"nofollow noreferrer\">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p>\n\n<p>I do not immediately see a nuke via the third derivative of $f$.</p>\n" } ]
Question: Is it possible that the formula for the $n^{th}$ prime number is a polynomial in $n$? Answer 1: <p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p> <p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>.</p> <hr> <p>By the way, you don't need to raise to the 4th power in your primality test; <a href="https://en.wikipedia.org/wiki/Wilson&#39;s_theorem">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p> Answer 2: <p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p> <p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href="https://en.wikipedia.org/wiki/Integer-valued_polynomial">Integer-valued polynomials</a>).</p> <p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is coprime to the denominators of all coefficients of $P$. Then since $(x+p)^k \equiv x^k \mod p$ for all nonnegative integers $k$, we get that $P(x+p) \equiv P(x) \equiv 0 \mod p$, so $P(x+p)$ can't be prime.</p> Answer 3: <p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable using something as high-tech as the <em>Prime Number Theorem</em>.</p> <p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too many primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\geq2$. This is key because <a href="https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes" rel="nofollow noreferrer">we know that</a> $$\sum_p\frac{1}{p}=+\infty,$$ while if $\deg q\geq2$ then we must have $$\sum_n\frac{1}{q(n)}\leq C\sum_n\frac{1}{n^2}&lt;+\infty$$ for some constant $C$.</p> Answer 4: <p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\lim_{x\rightarrow\infty}f'(x)=\infty$, which means that we have:</p> <blockquote> <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Yitang_Zhang#Research" rel="nofollow noreferrer">But this is known to be false.</a></p> <hr> <p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;x, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\ge 3$ contained in the primes. 
But this contradicts <a href="https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem" rel="nofollow noreferrer">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p> <p>I do not immediately see a nuke via the third derivative of $f$.</p> Answer 5: <p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p> <p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>.</p> <hr> <p>By the way, you don't need to raise to the 4th power in your primality test; <a href="https://en.wikipedia.org/wiki/Wilson&#39;s_theorem">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p> Answer 6: <p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p> <p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href="https://en.wikipedia.org/wiki/Integer-valued_polynomial">Integer-valued polynomials</a>).</p> <p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is coprime to the denominators of all coefficients of $x$. Then since $(x+p)^k \equiv x^k \mod p$ for all nonnegative integers $k$, we get that $P(x+p) \equiv P(x) \equiv 0\mod p$, so $P(x+p)$ can't be prime.</p> Answer 7: <p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p> <p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\geq2$. This is key because <a href="https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes" rel="nofollow noreferrer">we know that</a>$$\sum_p\frac{1}{p}=+\infty,$$ while if $\deg q\geq2$ then we must have $$\sum_n\frac{1}{q(n)}\leq C\sum_n\frac{1}{n^2}&lt;+\infty$$ for some constant $C$.</p> Answer 8: <p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\lim_{x\rightarrow\infty}f'(x)=\infty$, which means that we have:</p> <blockquote> <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Yitang_Zhang#Research" rel="nofollow noreferrer">But this is known to be false.</a></p> <hr> <p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\ge 3$ contained in the primes. 
But this contradicts <a href="https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem" rel="nofollow noreferrer">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p> <p>I do not immediately see a nuke via the third derivative of $f$.</p> Answer 9: <p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p> <p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>.</p> <hr> <p>By the way, you don't need to raise to the 4th power in your primality test; <a href="https://en.wikipedia.org/wiki/Wilson&#39;s_theorem">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p> Answer 10: <p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p> <p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href="https://en.wikipedia.org/wiki/Integer-valued_polynomial">Integer-valued polynomials</a>).</p> <p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is coprime to the denominators of all coefficients of $x$. Then since $(x+p)^k \equiv x^k \mod p$ for all nonnegative integers $k$, we get that $P(x+p) \equiv P(x) \equiv 0\mod p$, so $P(x+p)$ can't be prime.</p> Answer 11: <p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p> <p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\geq2$. This is key because <a href="https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes" rel="nofollow noreferrer">we know that</a>$$\sum_p\frac{1}{p}=+\infty,$$ while if $\deg q\geq2$ then we must have $$\sum_n\frac{1}{q(n)}\leq C\sum_n\frac{1}{n^2}&lt;+\infty$$ for some constant $C$.</p> Answer 12: <p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\lim_{x\rightarrow\infty}f'(x)=\infty$, which means that we have:</p> <blockquote> <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Yitang_Zhang#Research" rel="nofollow noreferrer">But this is known to be false.</a></p> <hr> <p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\ge 3$ contained in the primes. 
But this contradicts <a href="https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem" rel="nofollow noreferrer">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p> <p>I do not immediately see a nuke via the third derivative of $f$.</p> Answer 13: <p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p> <p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>.</p> <hr> <p>By the way, you don't need to raise to the 4th power in your primality test; <a href="https://en.wikipedia.org/wiki/Wilson&#39;s_theorem">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p> Answer 14: <p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p> <p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href="https://en.wikipedia.org/wiki/Integer-valued_polynomial">Integer-valued polynomials</a>).</p> <p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is coprime to the denominators of all coefficients of $x$. Then since $(x+p)^k \equiv x^k \mod p$ for all nonnegative integers $k$, we get that $P(x+p) \equiv P(x) \equiv 0\mod p$, so $P(x+p)$ can't be prime.</p> Answer 15: <p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p> <p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\geq2$. This is key because <a href="https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes" rel="nofollow noreferrer">we know that</a>$$\sum_p\frac{1}{p}=+\infty,$$ while if $\deg q\geq2$ then we must have $$\sum_n\frac{1}{q(n)}\leq C\sum_n\frac{1}{n^2}&lt;+\infty$$ for some constant $C$.</p> Answer 16: <p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\lim_{x\rightarrow\infty}f'(x)=\infty$, which means that we have:</p> <blockquote> <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Yitang_Zhang#Research" rel="nofollow noreferrer">But this is known to be false.</a></p> <hr> <p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\ge 3$ contained in the primes. 
But this contradicts <a href="https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem" rel="nofollow noreferrer">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p> <p>I do not immediately see a nuke via the third derivative of $f$.</p> Answer 17: <p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p> <p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>.</p> <hr> <p>By the way, you don't need to raise to the 4th power in your primality test; <a href="https://en.wikipedia.org/wiki/Wilson&#39;s_theorem">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p> Answer 18: <p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p> <p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href="https://en.wikipedia.org/wiki/Integer-valued_polynomial">Integer-valued polynomials</a>).</p> <p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is coprime to the denominators of all coefficients of $x$. Then since $(x+p)^k \equiv x^k \mod p$ for all nonnegative integers $k$, we get that $P(x+p) \equiv P(x) \equiv 0\mod p$, so $P(x+p)$ can't be prime.</p> Answer 19: <p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p> <p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\geq2$. This is key because <a href="https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes" rel="nofollow noreferrer">we know that</a>$$\sum_p\frac{1}{p}=+\infty,$$ while if $\deg q\geq2$ then we must have $$\sum_n\frac{1}{q(n)}\leq C\sum_n\frac{1}{n^2}&lt;+\infty$$ for some constant $C$.</p> Answer 20: <p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\lim_{x\rightarrow\infty}f'(x)=\infty$, which means that we have:</p> <blockquote> <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Yitang_Zhang#Research" rel="nofollow noreferrer">But this is known to be false.</a></p> <hr> <p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\ge 3$ contained in the primes. 
But this contradicts <a href="https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem" rel="nofollow noreferrer">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p> <p>I do not immediately see a nuke via the third derivative of $f$.</p> Answer 21: <p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p> <p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>.</p> <hr> <p>By the way, you don't need to raise to the 4th power in your primality test; <a href="https://en.wikipedia.org/wiki/Wilson&#39;s_theorem">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p> Answer 22: <p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p> <p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href="https://en.wikipedia.org/wiki/Integer-valued_polynomial">Integer-valued polynomials</a>).</p> <p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is coprime to the denominators of all coefficients of $x$. Then since $(x+p)^k \equiv x^k \mod p$ for all nonnegative integers $k$, we get that $P(x+p) \equiv P(x) \equiv 0\mod p$, so $P(x+p)$ can't be prime.</p> Answer 23: <p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p> <p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\geq2$. This is key because <a href="https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes" rel="nofollow noreferrer">we know that</a>$$\sum_p\frac{1}{p}=+\infty,$$ while if $\deg q\geq2$ then we must have $$\sum_n\frac{1}{q(n)}\leq C\sum_n\frac{1}{n^2}&lt;+\infty$$ for some constant $C$.</p> Answer 24: <p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\lim_{x\rightarrow\infty}f'(x)=\infty$, which means that we have:</p> <blockquote> <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Yitang_Zhang#Research" rel="nofollow noreferrer">But this is known to be false.</a></p> <hr> <p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\ge 3$ contained in the primes. 
But this contradicts <a href="https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem" rel="nofollow noreferrer">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p> <p>I do not immediately see a nuke via the third derivative of $f$.</p> Answer 25: <p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p> <p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>.</p> <hr> <p>By the way, you don't need to raise to the 4th power in your primality test; <a href="https://en.wikipedia.org/wiki/Wilson&#39;s_theorem">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p> Answer 26: <p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p> <p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href="https://en.wikipedia.org/wiki/Integer-valued_polynomial">Integer-valued polynomials</a>).</p> <p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is coprime to the denominators of all coefficients of $x$. Then since $(x+p)^k \equiv x^k \mod p$ for all nonnegative integers $k$, we get that $P(x+p) \equiv P(x) \equiv 0\mod p$, so $P(x+p)$ can't be prime.</p> Answer 27: <p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p> <p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\geq2$. This is key because <a href="https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes" rel="nofollow noreferrer">we know that</a>$$\sum_p\frac{1}{p}=+\infty,$$ while if $\deg q\geq2$ then we must have $$\sum_n\frac{1}{q(n)}\leq C\sum_n\frac{1}{n^2}&lt;+\infty$$ for some constant $C$.</p> Answer 28: <p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\lim_{x\rightarrow\infty}f'(x)=\infty$, which means that we have:</p> <blockquote> <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Yitang_Zhang#Research" rel="nofollow noreferrer">But this is known to be false.</a></p> <hr> <p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\ge 3$ contained in the primes. 
But this contradicts <a href="https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem" rel="nofollow noreferrer">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p> <p>I do not immediately see a nuke via the third derivative of $f$.</p> Answer 29: <p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p> <p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>.</p> <hr> <p>By the way, you don't need to raise to the 4th power in your primality test; <a href="https://en.wikipedia.org/wiki/Wilson&#39;s_theorem">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p> Answer 30: <p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p> <p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href="https://en.wikipedia.org/wiki/Integer-valued_polynomial">Integer-valued polynomials</a>).</p> <p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is coprime to the denominators of all coefficients of $x$. Then since $(x+p)^k \equiv x^k \mod p$ for all nonnegative integers $k$, we get that $P(x+p) \equiv P(x) \equiv 0\mod p$, so $P(x+p)$ can't be prime.</p> Answer 31: <p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p> <p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\geq2$. This is key because <a href="https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes" rel="nofollow noreferrer">we know that</a>$$\sum_p\frac{1}{p}=+\infty,$$ while if $\deg q\geq2$ then we must have $$\sum_n\frac{1}{q(n)}\leq C\sum_n\frac{1}{n^2}&lt;+\infty$$ for some constant $C$.</p> Answer 32: <p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\lim_{x\rightarrow\infty}f'(x)=\infty$, which means that we have:</p> <blockquote> <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Yitang_Zhang#Research" rel="nofollow noreferrer">But this is known to be false.</a></p> <hr> <p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\ge 3$ contained in the primes. 
But this contradicts <a href="https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem" rel="nofollow noreferrer">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p> <p>I do not immediately see a nuke via the third derivative of $f$.</p> Answer 33: <p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p> <p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>.</p> <hr> <p>By the way, you don't need to raise to the 4th power in your primality test; <a href="https://en.wikipedia.org/wiki/Wilson&#39;s_theorem">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p> Answer 34: <p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p> <p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href="https://en.wikipedia.org/wiki/Integer-valued_polynomial">Integer-valued polynomials</a>).</p> <p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is coprime to the denominators of all coefficients of $x$. Then since $(x+p)^k \equiv x^k \mod p$ for all nonnegative integers $k$, we get that $P(x+p) \equiv P(x) \equiv 0\mod p$, so $P(x+p)$ can't be prime.</p> Answer 35: <p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p> <p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\geq2$. This is key because <a href="https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes" rel="nofollow noreferrer">we know that</a>$$\sum_p\frac{1}{p}=+\infty,$$ while if $\deg q\geq2$ then we must have $$\sum_n\frac{1}{q(n)}\leq C\sum_n\frac{1}{n^2}&lt;+\infty$$ for some constant $C$.</p> Answer 36: <p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\lim_{x\rightarrow\infty}f'(x)=\infty$, which means that we have:</p> <blockquote> <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Yitang_Zhang#Research" rel="nofollow noreferrer">But this is known to be false.</a></p> <hr> <p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\ge 3$ contained in the primes. 
But this contradicts <a href="https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem" rel="nofollow noreferrer">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p> <p>I do not immediately see a nuke via the third derivative of $f$.</p> Answer 37: <p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p> <p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>.</p> <hr> <p>By the way, you don't need to raise to the 4th power in your primality test; <a href="https://en.wikipedia.org/wiki/Wilson&#39;s_theorem">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p> Answer 38: <p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p> <p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href="https://en.wikipedia.org/wiki/Integer-valued_polynomial">Integer-valued polynomials</a>).</p> <p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is coprime to the denominators of all coefficients of $x$. Then since $(x+p)^k \equiv x^k \mod p$ for all nonnegative integers $k$, we get that $P(x+p) \equiv P(x) \equiv 0\mod p$, so $P(x+p)$ can't be prime.</p> Answer 39: <p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p> <p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\geq2$. This is key because <a href="https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes" rel="nofollow noreferrer">we know that</a>$$\sum_p\frac{1}{p}=+\infty,$$ while if $\deg q\geq2$ then we must have $$\sum_n\frac{1}{q(n)}\leq C\sum_n\frac{1}{n^2}&lt;+\infty$$ for some constant $C$.</p> Answer 40: <p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\lim_{x\rightarrow\infty}f'(x)=\infty$, which means that we have:</p> <blockquote> <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Yitang_Zhang#Research" rel="nofollow noreferrer">But this is known to be false.</a></p> <hr> <p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\ge 3$ contained in the primes. 
But this contradicts <a href="https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem" rel="nofollow noreferrer">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$-many length $3$ arithmetic progressions in the primes.</p> <p>I do not immediately see a nuke via the third derivative of $f$.</p> Answer 41: <p>No, there cannot be any polynomial in $n$ whose value is always the $n$th prime.</p> <p>Clearly there is no <em>first-degree</em> polynomial with this property. And if the polynomial has degree <em>larger than</em> $1$, then for large $n$ it is going to grow too fast -- such a polynomial would declare <em>too few</em> numbers to be primes, contradicting known facts about how common primes are, in particular the <a href="https://en.wikipedia.org/wiki/Prime_number_theorem">prime number theorem</a>.</p> <hr> <p>By the way, you don't need to raise to the 4th power in your primality test; <a href="https://en.wikipedia.org/wiki/Wilson&#39;s_theorem">Wilson's theorem</a> says that $(n-1)!+1$ is divisible by $n$ exactly if $n$ is prime.</p> Answer 42: <p>In fact there is no non-constant polynomial whose values at the positive integers are all primes.</p> <p>First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. <a href="https://en.wikipedia.org/wiki/Integer-valued_polynomial">Integer-valued polynomials</a>).</p> <p>Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is coprime to the denominators of all coefficients of $x$. Then since $(x+p)^k \equiv x^k \mod p$ for all nonnegative integers $k$, we get that $P(x+p) \equiv P(x) \equiv 0\mod p$, so $P(x+p)$ can't be prime.</p> Answer 43: <p>While Henning Makholm's answer is fantastic, I just feel a little uncomfortable to use so high-tech as the <em>Prime Number Theorem</em>.</p> <p>Indeed, the reason behind the fact that no polynomial can interpolate the sequence of all primes is that there are too much primes. To be precise, we know that the primes are not an arithmetic sequence, so if a polynomial $q$ were to interpolate them it would need to have degree $\geq2$. This is key because <a href="https://en.m.wikipedia.org/wiki/Divergence_of_the_sum_of_the_reciprocals_of_the_primes" rel="nofollow noreferrer">we know that</a>$$\sum_p\frac{1}{p}=+\infty,$$ while if $\deg q\geq2$ then we must have $$\sum_n\frac{1}{q(n)}\leq C\sum_n\frac{1}{n^2}&lt;+\infty$$ for some constant $C$.</p> Answer 44: <p>I can't resist the following nuke: suppose $f(x)$ is a polynomial, with leading term $a_kx^k$, such that the $n$th prime is $f(n)$. Clearly $f$ can't be linear; and now it's not hard to see that $\lim_{x\rightarrow\infty}f'(x)=\infty$, which means that we have:</p> <blockquote> <p>For all $N$, there is some $n$ such that for all $m&gt;n$ we have $f(m+1)-f(m)&gt;N$.</p> </blockquote> <p><a href="https://en.wikipedia.org/wiki/Yitang_Zhang#Research" rel="nofollow noreferrer">But this is known to be false.</a></p> <hr> <p>EDIT: Yet another nuke. It's also true that $f''$ goes to infinity <em>(technically to conclude this we need to show that $f$ must have degree $\ge 3$, but that's not hard)</em>. So let $x$ be such that for all $y&gt;x$, $f''(y)&gt;0$ and $f'(y)&gt;0$; then there are no $z&gt;y, a&gt;0$ such that $$f(z+a)-f(z)=f(z+2a)-f(z+a);$$ that is, there are only finitely many arithmetic sequences of length $\ge 3$ contained in the primes. 
But this contradicts <a href="https://en.wikipedia.org/wiki/Green%E2%80%93Tao_theorem" rel="nofollow noreferrer">the Green-Tao theorem</a>: for any $n$, an arithmetic progression of length $n+2$ witnesses the fact that there are at least $n$ arithmetic progressions of length $3$ in the primes.</p> <p>I do not immediately see a nuke via the third derivative of $f$.</p>
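The Wilson's-theorem aside in the answers above is easy to sanity-check empirically. A minimal sketch in plain Python, using trial division as the reference primality test (the range bound 25 is an arbitrary illustrative choice):

```python
# Check that (n-1)! + 1 is divisible by n exactly when n is prime, for small n.
import math

def is_prime(n):
    # Trial division up to sqrt(n); enough as a reference test for small n.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for n in range(2, 25):
    wilson = (math.factorial(n - 1) + 1) % n == 0
    assert wilson == is_prime(n), n
print("Wilson's criterion agrees with trial division for 2 <= n < 25")
```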
How to show that $\int_{0}^{\pi}(1+2x)\cdot{\sin^3(x)\over 1+\cos^2(x)}\mathrm dx=(\pi+1)(\pi-2)?$
2,108,420
7
https://math.stackexchange.com/questions/2108420/how-to-show-that-int-0-pi12x-cdot-sin3x-over-1-cos2x-mathrm-d
<p>How do we show the following?</p> <blockquote> <p>$$\int_{0}^{\pi}(1+2x)\cdot{\sin^3(x)\over 1+\cos^2(x)}\mathrm dx=(\pi+1)(\pi-2)\tag1$$</p> </blockquote> <p>$(1)$ is a bit difficult to start with.</p> <p>$$\int_{0}^{\pi}(1+2x)\cdot{\sin(x)[1-\sin^2(x)]\over 1+\cos^2(x)}\mathrm dx\tag2$$</p> <p>Setting $u=\cos(x)$ </p> <p>$du=-\sin(x)dx$</p> <p>$$\int_{-1}^{1}(1+2x)\cdot{(u^2)\over 1+u^2}\mathrm du\tag3$$</p> <p>$$\int_{-1}^{1}(1+2\arccos(u))\cdot{(u^2)\over 1+u^2}\mathrm du\tag4$$</p> <p>Setting $u=\tan(v)$, $du=\sec^2(v)dv$</p> <p>$$\int_{-\pi/4}^{\pi/4}(1+2\arccos(\tan(v)))\tan^2(v)\mathrm dv\tag5$$</p> <p>$$\int_{-\pi/4}^{\pi/4}\tan^2(v)+2\tan^2(v)\arccos(\tan(v))\mathrm dv=I_1+I_2\tag6$$</p> <p>$$I_1=\int_{-\pi/4}^{\pi/4}\tan^2(v)\mathrm dv=2-{\pi\over2}\tag7$$</p> <p>As for $I_2$, I am not sure how to do it.</p>
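Before chasing a closed form, it can help to confirm the claimed value numerically. A minimal sketch, assuming SciPy is available (the adaptive quadrature call with its default tolerances is just an illustrative choice):

```python
# Numerically check that the integral matches (pi + 1)(pi - 2) ~ 4.7288.
import math
from scipy.integrate import quad

integrand = lambda x: (1 + 2 * x) * math.sin(x) ** 3 / (1 + math.cos(x) ** 2)
lhs, _err = quad(integrand, 0, math.pi)  # adaptive quadrature on [0, pi]
rhs = (math.pi + 1) * (math.pi - 2)
print(lhs, rhs)  # both should print roughly 4.7288
```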
[ { "answer_id": 2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that $\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span 
class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" }, { "answer_id": 2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that $\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span 
class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" }, { "answer_id": 2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that $\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span 
class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" }, { "answer_id": 2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that $\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). 
Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" }, { "answer_id": 2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that $\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. 
Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" }, { "answer_id": 2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that $\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 
2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" }, { "answer_id": 2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that $\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin 
x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" }, { "answer_id": 2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that 
$\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" }, { "answer_id": 
2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that $\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin 
x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" }, { "answer_id": 2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that $\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span 
class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" }, { "answer_id": 2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that $\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span 
class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" }, { "answer_id": 2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that $\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). 
Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" }, { "answer_id": 2108466, "score": 9, "text": "<p>Using <a href=\"https://math.stackexchange.com/q/159381/97236\">the fact that</a>\n$$\n\\int_0^\\pi xf(\\sin x)\\,dx=\\frac\\pi2\\int_0^\\pi f(\\sin x)\\,dx\n$$\nand that $\\cos^2=1-\\sin^2x$ (so that this applies), you get that your integral equals\n$$\n(1+\\pi)\\int_0^\\pi \\frac{\\sin^3 x}{1+\\cos^2x}\\,dx\n$$\nWriting\n$$\n\\frac{\\sin^3 x}{1+\\cos^2x}=\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}=\\frac{2}{1+\\cos^2x}\\sin x-\\sin x\n$$\nwe find that your integral equals\n$$\n(1+\\pi)\\bigl[-2\\arctan(\\cos x)+\\cos x\\bigr]_0^\\pi=(1+\\pi)(\\pi-2).\n$$</p>\n" }, { "answer_id": 2108433, "score": 5, "text": "<p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\\sin^3 x/(1+\\cos^2 x)$ is given by $v=\\cos x -2 \\arctan \\cos x$. 
Thus, we can integrate the problem by parts and obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi -2 \\underbrace{\\int_0^\\pi(\\cos x -2 \\arctan \\cos x)dx}_{=I_2}.$$</p>\n\n<p>Now you can observe that $v=\\cos x -2 \\arctan \\cos x$ is antisymmetric around $x=\\pi/2$ (because $\\cos$ is antisymmetric around $x=\\pi/2$ and $\\arctan$ is antisymmetric around $x=0$). Thus, the remaining integral vanishes $(I_2=0)$ and we obtain\n$$I = (\\cos x -2 \\arctan \\cos x)(1+2x)\\Big|_{x=0}^\\pi= (\\pi+1)(\\pi-2).$$</p>\n" }, { "answer_id": 2108479, "score": 3, "text": "<p>$J=\\displaystyle \\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$</p>\n\n<p>Perform the change of variable $y=\\pi-x$,</p>\n\n<p>$\\displaystyle J=\\int_0^{\\pi} (1+2(\\pi-x))\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx$</p>\n\n<p>Therefore,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^3 x}{1+\\cos^2 x}dx\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{\\sin^2 x}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n2J&amp;=-(2+2\\pi)\\int_0^{\\pi}\\dfrac{(1-\\cos^2 x)}{1+\\cos^2 x}\\text{d}(\\cos x)\\\\\n\\end{align}$</p>\n\n<p>Perform the change of variable $y=\\cos x$ in the latter integral,</p>\n\n<p>$\\begin{align}\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1-x^2}{1+x^2}dx\\\\\n\\displaystyle 2J&amp;=(2+2\\pi)\\int_{-1}^{1}\\dfrac{1}{1+x^2}dx-(2+2\\pi)\\int_{-1}^{1}\\dfrac{x^2}{1+x^2}dx\\\\\n&amp;=(2+2\\pi)\\Big[\\arctan x\\Big]_{-1}^{1}-(2+2\\pi)\\left(\\int_{-1}^1 \\dfrac{1+x^2}{1+x^2}dx-\\int_{-1}^1 \\dfrac{1}{1+x^2}dx\\right)\\\\\n&amp;=4(2+2\\pi)\\dfrac{\\pi}{4}-2(2+2\\pi)\\\\\n&amp;=2(\\pi+1)(\\pi-2)\n\\end{align}$</p>\n\n<p>Therefore,</p>\n\n<p>$\\boxed{J=(\\pi+1)(\\pi-2)}$</p>\n" }, { "answer_id": 2108475, "score": 1, "text": "<p>HINT:</p>\n\n<p>Use $\\displaystyle I=\\int_a^bf(x)\\ dx=\\int_a^bf(a+b-x)\\ dx$</p>\n\n<p>so that $\\displaystyle I+I=\\int_a^b[f(x)+f(a+b-x)]dx$</p>\n\n<p>Now use $\\sin^3x\\ dx=-(1-\\cos^2x)d(\\cos x)$</p>\n\n<p>so replace $\\cos x=u$</p>\n" }, { "answer_id": 2108445, "score": 0, "text": "<p>I am not sure if it is the correct approach,<span class=\"math-container\">$$\\int_{0}^{\\pi}(1+2x)\\cdot{\\sin^3(x)\\over 1+\\cos^2(x)}\\mathrm dx$$</span>\nIntegrating by parts, taking <span class=\"math-container\">$(1+2x)$</span> as first function and then we have:</p>\n<p><span class=\"math-container\">$$(1+2x)\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx-2\\int_{0}^{\\pi}\\left(\\int_{0}^{v}\\frac{\\sin^3x}{1+\\cos^2x}dx\\right)dv$$</span></p>\n<p>then for <span class=\"math-container\">$\\int_{0}^{\\pi}\\frac{\\sin^3x}{1+\\cos^2x}dx$</span>\nsimplify it as:\n<span class=\"math-container\">$$\\int_{0}^{\\pi}\\frac{(1-\\cos^2x)\\sin x}{1+\\cos^2x}dx$$</span>\nsubstitute <span class=\"math-container\">$\\cos x = t$</span>\nand there after it is fairly simple to calculate by some method of integration.</p>\n" } ]
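The key antiderivative claim in the answers above (that $v=\cos x-2\arctan\cos x$ differentiates to $\sin^3 x/(1+\cos^2 x)$) is easy to machine-check. A small sketch, assuming SymPy is available:

```python
# Verify d/dx [cos x - 2*atan(cos x)] == sin(x)**3 / (1 + cos(x)**2).
import sympy as sp

x = sp.symbols('x')
v = sp.cos(x) - 2 * sp.atan(sp.cos(x))
diff = sp.simplify(sp.diff(v, x) - sp.sin(x) ** 3 / (1 + sp.cos(x) ** 2))
print(diff)  # expected: 0
```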
Question: How to show that $\int_{0}^{\pi}(1+2x)\cdot{\sin^3(x)\over 1+\cos^2(x)}\mathrm dx=(\pi+1)(\pi-2)?$ Answer 1: <p>Using <a href="https://math.stackexchange.com/q/159381/97236">the fact that</a> $$ \int_0^\pi xf(\sin x)\,dx=\frac\pi2\int_0^\pi f(\sin x)\,dx $$ and that $\cos^2=1-\sin^2x$ (so that this applies), you get that your integral equals $$ (1+\pi)\int_0^\pi \frac{\sin^3 x}{1+\cos^2x}\,dx $$ Writing $$ \frac{\sin^3 x}{1+\cos^2x}=\frac{(1-\cos^2x)\sin x}{1+\cos^2x}=\frac{2}{1+\cos^2x}\sin x-\sin x $$ we find that your integral equals $$ (1+\pi)\bigl[-2\arctan(\cos x)+\cos x\bigr]_0^\pi=(1+\pi)(\pi-2). $$</p> Answer 2: <p>I am not sure this is the easiest (but it is a viable approach). Note that the antiderivative of $v'=\sin^3 x/(1+\cos^2 x)$ is given by $v=\cos x -2 \arctan \cos x$. Thus, we can integrate the problem by parts and obtain $$I = (\cos x -2 \arctan \cos x)(1+2x)\Big|_{x=0}^\pi -2 \underbrace{\int_0^\pi(\cos x -2 \arctan \cos x)dx}_{=I_2}.$$</p> <p>Now you can observe that $v=\cos x -2 \arctan \cos x$ is antisymmetric around $x=\pi/2$ (because $\cos$ is antisymmetric around $x=\pi/2$ and $\arctan$ is antisymmetric around $x=0$). Thus, the remaining integral vanishes $(I_2=0)$ and we obtain $$I = (\cos x -2 \arctan \cos x)(1+2x)\Big|_{x=0}^\pi= (\pi+1)(\pi-2).$$</p> Answer 3: <p>$J=\displaystyle \int_{0}^{\pi}(1+2x)\cdot{\sin^3(x)\over 1+\cos^2(x)}\mathrm dx$</p> <p>Perform the change of variable $y=\pi-x$,</p> <p>$\displaystyle J=\int_0^{\pi} (1+2(\pi-x))\dfrac{\sin^3 x}{1+\cos^2 x}dx$</p> <p>Therefore,</p> <p>$\begin{align}\displaystyle 2J&amp;=(2+2\pi)\int_0^{\pi}\dfrac{\sin^3 x}{1+\cos^2 x}dx\\ 2J&amp;=-(2+2\pi)\int_0^{\pi}\dfrac{\sin^2 x}{1+\cos^2 x}\text{d}(\cos x)\\ 2J&amp;=-(2+2\pi)\int_0^{\pi}\dfrac{(1-\cos^2 x)}{1+\cos^2 x}\text{d}(\cos x)\\ \end{align}$</p> <p>Perform the change of variable $y=\cos x$ in the latter integral,</p> <p>$\begin{align}\displaystyle 2J&amp;=(2+2\pi)\int_{-1}^{1}\dfrac{1-x^2}{1+x^2}dx\\ \displaystyle 2J&amp;=(2+2\pi)\int_{-1}^{1}\dfrac{1}{1+x^2}dx-(2+2\pi)\int_{-1}^{1}\dfrac{x^2}{1+x^2}dx\\ &amp;=(2+2\pi)\Big[\arctan x\Big]_{-1}^{1}-(2+2\pi)\left(\int_{-1}^1 \dfrac{1+x^2}{1+x^2}dx-\int_{-1}^1 \dfrac{1}{1+x^2}dx\right)\\ &amp;=4(2+2\pi)\dfrac{\pi}{4}-2(2+2\pi)\\ &amp;=2(\pi+1)(\pi-2) \end{align}$</p> <p>Therefore,</p> <p>$\boxed{J=(\pi+1)(\pi-2)}$</p> Answer 4: <p>HINT:</p> <p>Use $\displaystyle I=\int_a^bf(x)\ dx=\int_a^bf(a+b-x)\ dx$</p> <p>so that $\displaystyle I+I=\int_a^b[f(x)+f(a+b-x)]dx$</p> <p>Now use $\sin^3x\ dx=-(1-\cos^2x)d(\cos x)$</p> <p>so replace $\cos x=u$</p> Answer 5: <p>I am not sure if it is the correct approach,<span class="math-container">$$\int_{0}^{\pi}(1+2x)\cdot{\sin^3(x)\over 1+\cos^2(x)}\mathrm dx$$</span> Integrating by parts, taking <span class="math-container">$(1+2x)$</span> as first function and then we have:</p> <p><span class="math-container">$$(1+2x)\int_{0}^{\pi}\frac{\sin^3x}{1+\cos^2x}dx-2\int_{0}^{\pi}\left(\int_{0}^{v}\frac{\sin^3x}{1+\cos^2x}dx\right)dv$$</span></p> <p>then for <span class="math-container">$\int_{0}^{\pi}\frac{\sin^3x}{1+\cos^2x}dx$</span> simplify it as: <span class="math-container">$$\int_{0}^{\pi}\frac{(1-\cos^2x)\sin x}{1+\cos^2x}dx$$</span> substitute <span class="math-container">$\cos x = t$</span> and there after it is fairly simple to calculate by some method of integration.</p> Answer 6: <p>Using <a href="https://math.stackexchange.com/q/159381/97236">the fact that</a> $$ \int_0^\pi xf(\sin x)\,dx=\frac\pi2\int_0^\pi f(\sin x)\,dx $$ and that $\cos^2=1-\sin^2x$ 
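Answer 2's remaining step, that $I_2=\int_0^\pi(\cos x-2\arctan\cos x)\,dx$ vanishes by antisymmetry about $x=\pi/2$, is just as easy to confirm numerically; again an editorial sketch assuming the mpmath library, not part of the thread:

```python
# Editorial check of Answer 2's antisymmetry claim (assumes mpmath is installed).
from mpmath import mp, quad, cos, atan, pi

mp.dps = 30
v = lambda x: cos(x) - 2*atan(cos(x))
print(quad(v, [0, pi]))               # ~1e-30: the integral I_2 vanishes
print(v(pi/2 + 0.3) + v(pi/2 - 0.3))  # ~0: antisymmetry about x = pi/2
```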
Proof of the radical expression of $\cos\dfrac {2\pi}{17}$
2,109,390
7
https://math.stackexchange.com/questions/2109390/proof-of-the-radical-expression-of-cos-dfrac-2-pi17
<blockquote> <p><strong>Question:</strong> How would you prove the equation $$\small\cos\dfrac {2\pi}{17}=\dfrac {-1+\sqrt{17}+\sqrt{34-2\sqrt{17}}+2\sqrt{17+3\sqrt{17}-\sqrt{34-2\sqrt{17}}-2\sqrt{34+2\sqrt{17}}}}{16}\tag1$$</p> </blockquote> <p>I'm not too sure how to prove it, and I'm not sure where to begin. I started with $\exp(2\pi i)=\cos(2\pi)+i\sin(2\pi)=1\implies \cos(2k\pi)+i\sin(2k\pi)=\exp(2k\pi i)$. But I'm not sure what to do next.</p>
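Equation (1) can be sanity-checked numerically to high precision before proving it; the following is an editorial sketch (it assumes Python with the mpmath library and is not part of the original post):

```python
# Editorial numerical check of equation (1) (assumes mpmath is installed).
from mpmath import mp, cos, sqrt, pi

mp.dps = 40
lhs = cos(2*pi/17)
inner = 17 + 3*sqrt(17) - sqrt(34 - 2*sqrt(17)) - 2*sqrt(34 + 2*sqrt(17))
rhs = (-1 + sqrt(17) + sqrt(34 - 2*sqrt(17)) + 2*sqrt(inner)) / 16
print(lhs - rhs)  # ~1e-41: zero to working precision
```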
[ { "answer_id": 2109558, "score": 3, "text": "<p>The more natural item is $$ 2 \\cos \\left( \\frac{2 \\pi}{17} \\right), $$ which is one root of \n$$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$\nfrom page 18 of <a href=\"https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false\" rel=\"nofollow noreferrer\">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\\omega \\neq 1$ but $\\omega^{17} = 1,$ it is not hard to show that $\\omega + \\frac{1}{\\omega}$ is a root of the given polynomial, using\n$$ \\small 1 + \\omega + \\omega^2 + \\omega^3 + \\omega^4 + \\omega^5 + \\omega^6 + \\omega^7 + \\omega^8 + \\omega^9 + \\omega^{10} + \\omega^{11} + \\omega^{12} + \\omega^{13} + \\omega^{14} + \\omega^{15} + \\omega^{16} =0 $$</p>\n\n<p><a href=\"https://i.stack.imgur.com/iGPye.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iGPye.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HBV6z.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HBV6z.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 4726027, "score": 3, "text": "<p>I will show this by using Galois-theory. I will be using some properties of the fields <span class=\"math-container\">$\\mathbf{Q}(\\zeta_p)$</span> with <span class=\"math-container\">$p$</span> prime, which you can find <a href=\"https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf\" rel=\"nofollow noreferrer\">here</a> on page 51.</p>\n<p>I start by looking at the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\zeta_{17})$</span>, where I define <span class=\"math-container\">$\\zeta:=\\zeta_{17}:= \\text{exp}( \\frac{2 \\pi i}{17}) = \\cos\\left(\\frac{2\\pi}{17}\\right)+i\\sin\\left(\\frac{2\\pi}{17}\\right)$</span>. The galois-group <span class=\"math-container\">$\\text{Gal}(\\mathbf{Q}(\\zeta)/\\mathbf{Q})$</span> is isomorphic to <span class=\"math-container\">$\\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class=\"math-container\">$16$</span>, which is a power of two, therefore <span class=\"math-container\">$\\zeta$</span> is constructible using ruler and compass. In other words, it is possible to express <span class=\"math-container\">$\\zeta$</span> using square roots (square roots of square roots ..).</p>\n<p>Consider the Gauss periods:\n<span class=\"math-container\">\\begin{align}\n\\eta_2&amp;:=\\zeta +\\zeta^{-1} +\\zeta^2 +\\zeta^{-2} + \\zeta^4 + \\zeta^{-4} + \\zeta^8 +\\zeta^{-8} \\\\\n\\eta_4&amp;:= \\zeta +\\zeta^{-1} + \\zeta^4 + \\zeta^{-4}\\\\\n\\eta_8&amp;:= \\zeta^1 + \\zeta^{-1}.\n\\end{align}</span>\nNotice furthermore <span class=\"math-container\">$\\zeta^{-1}$</span> is the complex conjugated of <span class=\"math-container\">$\\zeta$</span>, thus <span class=\"math-container\">$\\eta_8 := \\zeta+\\zeta^{-1} = 2\\cos\\left(\\frac{2\\pi}{17}\\right)$</span>. 
Therefore, I will express <span class=\"math-container\">$\\eta_8$</span> using radicals.</p>\n<p>Let's look at the tower of fields: <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8) \\subset \\mathbf{Q}(\\zeta)$</span>. Note that <span class=\"math-container\">$\\mathbf{Q}(\\eta_8)$</span> corresponds to the unique subgroup <span class=\"math-container\">$H_8:=\\langle -1 \\rangle \\subset \\mathbf{F}_{17}^*$</span> of index <span class=\"math-container\">$8$</span>, while <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> resp. <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> correspond to the unique subgroups <span class=\"math-container\">$H_4 := \\langle 4 \\rangle$</span> resp. <span class=\"math-container\">$H_2:=\\langle -8 \\rangle$</span> of index <span class=\"math-container\">$4$</span> resp. <span class=\"math-container\">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class=\"math-container\">$\\mathbf{F}_{17}^* \\supset H_2 \\supset H_4 \\supset H_8 \\supset \\{1\\}$</span>.</p>\n<p>What I will do now is express <span class=\"math-container\">$\\eta_4$</span> in terms of <span class=\"math-container\">$\\eta_2$</span> (and square roots of it), and <span class=\"math-container\">$\\eta_8$</span> in terms of <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class=\"math-container\">$\\eta_j$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_k) \\subset \\mathbf{Q}(\\eta_j)$</span> and then using the abc-formula (the quadratic formula).</p>\n<p>The minimum polynomial of <span class=\"math-container\">$\\eta_2$</span> over <span class=\"math-container\">$\\mathbf{Q}$</span> denoted by <span class=\"math-container\">$f^{\\eta_2}_{\\mathbf{Q}}$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_2}_{\\mathbf{Q}} = (x-\\eta_2)\\cdot \\left(x-(\\zeta^3+\\zeta^{-3}+\\zeta^5+\\zeta^{-5}+\\zeta^6+\\zeta^{-6}+\\zeta^7+\\zeta^{-7})\\right)=x^2+x-4.\n\\end{align}</span>\nThis follows from the fact that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_2 = \\{ H_2, 3H_2\\}$</span> (<span class=\"math-container\">$\\eta_2$</span> has only one other conjugate). I can see that <span class=\"math-container\">$\\eta_2 = \\frac{1}{2}(-1\\pm \\sqrt{17})$</span> using the abc-formula. Note that you can verify with a calculator (check that the sum <span class=\"math-container\">$\\sum_{k \\in H_2}\\cos(2k\\frac{\\pi}{17})$</span> is positive), by using the definition of <span class=\"math-container\">$\\eta_2$</span>, that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_2 = \\frac{1}{2}(-1+ \\sqrt{17}). \n\\end{align}</span></p>\n<p>I turn to <span class=\"math-container\">$\\eta_4$</span>, where I have that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_4 = \\{ H_4,3H_4,6H_4,8H_4\\}$</span>, which corresponds to the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_4)$</span>. However, I restrict to <span class=\"math-container\">$\\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4)$</span> which corresponds to <span class=\"math-container\">$H_2/H_4 = \\{ H_4,8H_4\\}$</span> (<span class=\"math-container\">$\\eta_4$</span> has one other conjugate over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span>). 
Therefore the minimum polynomial <span class=\"math-container\">$f^{\\eta_4}_{\\mathbf{Q}(\\eta_2)}$</span> of <span class=\"math-container\">$\\eta_4$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_4}_{\\mathbf{Q}(\\eta_2)} &amp;= (x-\\eta_4)\\cdot \\left(x- (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8} )\\right) \\\\\n&amp;= x^2 -(\\eta_4 +\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})x +\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})\\\\\n&amp;= x^2-\\eta_2x-1.\n\\end{align}</span>\nHere you can verify that <span class=\"math-container\">$\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class=\"math-container\">$\\zeta$</span> is <span class=\"math-container\">$\\Phi_{17} = \\sum^{16}_{k=0}x^k$</span> (in other words <span class=\"math-container\">$\\sum^{16}_{k=0}\\zeta^k =0$</span>). Now using the abc-formula you can see that:\n<span class=\"math-container\">$\\eta_4 = \\frac{1}{2}(\\eta_2 \\pm \\sqrt{\\eta_2^2+4})$</span>. Here you can check again that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_4 = \\frac{1}{2}(\\eta_2 + \\sqrt{\\eta_2^2+4}),\n\\end{align}</span> by simply noting that <span class=\"math-container\">$\\sum_{k \\in H_4} \\cos(2k\\frac{\\pi}{17})$</span> is positive.</p>\n<p>Last but not least, I consider <span class=\"math-container\">$\\eta_8$</span>. I'm interested in the extension <span class=\"math-container\">$\\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8)$</span> with corresponding group <span class=\"math-container\">$H_4/H_8 = \\{ H_8,4H_8\\}$</span>. So, the minimum polynomial of <span class=\"math-container\">$\\eta_8$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} &amp;= (x-\\eta_8) \\cdot \\left(x- (\\zeta^4+\\zeta^{-4})\\right) \\\\\n&amp;= x^2-(\\eta_8+\\zeta^4+\\zeta^{-4})x+\\eta_8(\\zeta^4+\\zeta^{-4})\\\\\n&amp;= x^2-\\eta_4x+\\beta.\n\\end{align}</span>\nNow <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = (\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4})$</span> can be expressed using <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> by noticing that:\n<span class=\"math-container\">\\begin{align}\n\\eta_4^2 &amp;= (\\zeta+\\zeta^{-1} + \\zeta^4+\\zeta^{-4})^2 = (\\zeta+\\zeta^{-1})^2 + 2(\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4}) + (\\zeta^4+\\zeta^{-4})^2\\\\\n&amp;= \\zeta^2+\\zeta^{-2} + \\zeta^8+\\zeta^{-8} + 4 + 2 \\beta \\\\\n&amp;= \\eta_2 -\\eta_4 + 4 + 2\\beta.\n\\end{align}</span>\nThus <span class=\"math-container\">$\\beta = \\eta_8(\\zeta^4+\\zeta^{-4}) = \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4)$</span>. So I can see that:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} = x^2 - \\eta_4x + \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4). 
\n\\end{align}</span>\nUsing the abc-formula (and checking whether it has to be plus or minus) I find that:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 &amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{\\eta_4^2-2(\\eta_4^2+\\eta_4-\\eta_2-4)}\\right)\\\\\n&amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{-\\eta_4^2-2(\\eta_4-\\eta_2-4)}\\right).\n\\end{align}</span>\nWorking everything out (not very long) you end up with:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 = \\frac{1}{8} (-1 + \\sqrt{17}) + \\frac{1}{8} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{4} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNow I conclude that:\n<span class=\"math-container\">\\begin{align}\n\\cos\\left({\\frac{2\\pi}{17}}\\right) = \\frac{1}{2}\\eta_8 = \\frac{1}{16} (-1 + \\sqrt{17}) + \\frac{1}{16} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{8} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNote that in the big square root the term <span class=\"math-container\">$ \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class=\"math-container\">$-\\sqrt{34-2\\sqrt{17}}-2\\sqrt{34+2\\sqrt{17}}$</span>. You can show by squaring both expressions that you get <span class=\"math-container\">$170+38\\sqrt{17}$</span>, thus both are square roots of <span class=\"math-container\">$170+38\\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p>\n<p>You can try the same process with <span class=\"math-container\">$\\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p>\n" },
I will be using some properties of the fields <span class=\"math-container\">$\\mathbf{Q}(\\zeta_p)$</span> with <span class=\"math-container\">$p$</span> prime, which you can find <a href=\"https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf\" rel=\"nofollow noreferrer\">here</a> on page 51.</p>\n<p>I start by looking at the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\zeta_{17})$</span>, where I define <span class=\"math-container\">$\\zeta:=\\zeta_{17}:= \\text{exp}( \\frac{2 \\pi i}{17}) = \\cos\\left(\\frac{2\\pi}{17}\\right)+i\\sin\\left(\\frac{2\\pi}{17}\\right)$</span>. The galois-group <span class=\"math-container\">$\\text{Gal}(\\mathbf{Q}(\\zeta)/\\mathbf{Q})$</span> is isomorphic to <span class=\"math-container\">$\\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class=\"math-container\">$16$</span>, which is a power of two, therefore <span class=\"math-container\">$\\zeta$</span> is constructible using ruler and compass. In other words, it is possible to express <span class=\"math-container\">$\\zeta$</span> using square roots (square roots of square roots ..).</p>\n<p>Consider the Gauss periods:\n<span class=\"math-container\">\\begin{align}\n\\eta_2&amp;:=\\zeta +\\zeta^{-1} +\\zeta^2 +\\zeta^{-2} + \\zeta^4 + \\zeta^{-4} + \\zeta^8 +\\zeta^{-8} \\\\\n\\eta_4&amp;:= \\zeta +\\zeta^{-1} + \\zeta^4 + \\zeta^{-4}\\\\\n\\eta_8&amp;:= \\zeta^1 + \\zeta^{-1}.\n\\end{align}</span>\nNotice furthermore <span class=\"math-container\">$\\zeta^{-1}$</span> is the complex conjugated of <span class=\"math-container\">$\\zeta$</span>, thus <span class=\"math-container\">$\\eta_8 := \\zeta+\\zeta^{-1} = 2\\cos\\left(\\frac{2\\pi}{17}\\right)$</span>. Therefore, I will express <span class=\"math-container\">$\\eta_8$</span> using radicals.</p>\n<p>Let's look at the tower of fields: <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8) \\subset \\mathbf{Q}(\\zeta)$</span>. Note that <span class=\"math-container\">$\\mathbf{Q}(\\eta_8)$</span> corresponds to the unique subgroup <span class=\"math-container\">$H_8:=\\langle -1 \\rangle \\subset \\mathbf{F}_{17}^*$</span> of index <span class=\"math-container\">$8$</span>. Where <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> resp. <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> correspond to the unique subgroups <span class=\"math-container\">$H_4 := \\langle 4 \\rangle$</span> resp. <span class=\"math-container\">$H_2:=\\langle -8 \\rangle$</span> of index <span class=\"math-container\">$4$</span> resp. <span class=\"math-container\">$2$</span>. 
So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class=\"math-container\">$\\mathbf{F}_{17}^* \\supset H_2 \\supset H_4 \\supset H_8 \\supset \\{1\\}$</span>.</p>\n<p>What I will do now is express <span class=\"math-container\">$\\eta_4$</span> in terms of <span class=\"math-container\">$\\eta_2$</span> (and square roots of it), and <span class=\"math-container\">$\\eta_8$</span> in terms of <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class=\"math-container\">$\\eta_j$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_k) \\subset \\mathbf{Q}(\\eta_j)$</span> and then using the abc-formula for quadratics.</p>\n<p>The minimum polynomial of <span class=\"math-container\">$\\eta_2$</span> over <span class=\"math-container\">$\\mathbf{Q}$</span> denoted by <span class=\"math-container\">$f^{\\eta_2}_{\\mathbf{Q}}$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_2}_{\\mathbf{Q}} = (x-\\eta_2)\\cdot \\left(x-(\\zeta^3+\\zeta^{-3}+\\zeta^5+\\zeta^{-5}+\\zeta^6+\\zeta^{-6}+\\zeta^7+\\zeta^{-7} \\right)=x^2+x-4.\n\\end{align}</span>\nThis fact follows from the fact that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_2 = \\{ H_2, 3H_2\\}$</span> (<span class=\"math-container\">$\\eta_2$</span> has one different conjugate). I can see that <span class=\"math-container\">$\\eta_2 = \\frac{1}{2}(-1\\pm \\sqrt{17})$</span> using the abc-formula. Note that you can verify with a calculator (check that the sum<span class=\"math-container\">$\\sum_{k \\in H_2}\\cos(2k\\frac{\\pi}{17})$</span> is positive), by using the definition of <span class=\"math-container\">$\\eta_2$</span>, that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_2 = \\frac{1}{2}(-1+ \\sqrt{17}). \n\\end{align}</span></p>\n<p>I turn to <span class=\"math-container\">$\\eta_4$</span>, where I have that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_4 = \\{ H_4,3H_4,6H_4,8H_4\\}$</span>, which corresponds to the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_4)$</span>. However, I restrict to <span class=\"math-container\">$\\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4)$</span> which corresponds to <span class=\"math-container\">$H_2/H_4 = \\{ H_4,8H_4\\}$</span> (<span class=\"math-container\">$\\eta_4$</span> has one other conjugate over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span>). Therfore the minimum polynomial <span class=\"math-container\">$f^{\\eta_4}_{\\mathbf{Q}(\\eta_2)}$</span> of <span class=\"math-container\">$\\eta_4$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_4}_{\\mathbf{Q}(\\eta_2)} &amp;= (x-\\eta_4)\\cdot \\left(x- (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8} )\\right) \\\\\n&amp;= x^2 -(\\eta_4 +\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})x +\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})\\\\\n&amp;= x^2-\\eta_2x-1.\n\\end{align}</span>\nWhere you can verify that <span class=\"math-container\">$\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class=\"math-container\">$\\zeta$</span> is <span class=\"math-container\">$\\Phi_{17} = \\sum^{16}_{k=0}x^k$</span> ( in other words <span class=\"math-container\">$\\sum^{16}_{k=0}\\zeta^k =0$</span>). 
Now using the quadratic formula you can see that\n<span class=\"math-container\">$\\eta_4 = \\frac{1}{2}(\\eta_2 \\pm \\sqrt{\\eta_2^2+4})$</span>, and you can check again that in fact\n<span class=\"math-container\">\\begin{align}\n\\eta_4 = \\frac{1}{2}(\\eta_2 + \\sqrt{\\eta_2^2+4}),\n\\end{align}</span> by simply noting that <span class=\"math-container\">$\\sum_{k \\in H_4} \\cos(2k\\frac{\\pi}{17})$</span> is positive.</p>\n<p>Last but not least, I consider <span class=\"math-container\">$\\eta_8$</span>. I'm interested in the extension <span class=\"math-container\">$\\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8)$</span> with corresponding group <span class=\"math-container\">$H_4/H_8 = \\{ H_8,4H_8\\}$</span>. So the minimum polynomial of <span class=\"math-container\">$\\eta_8$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} &amp;= (x-\\eta_8) \\cdot \\left(x- (\\zeta^4+\\zeta^{-4})\\right) \\\\\n&amp;= x^2-(\\eta_8+\\zeta^4+\\zeta^{-4})x+\\eta_8(\\zeta^4+\\zeta^{-4})\\\\\n&amp;= x^2-\\eta_4x+\\beta.\n\\end{align}</span>\nNow <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = (\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4})$</span> can be expressed using <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> by noticing that:\n<span class=\"math-container\">\\begin{align}\n\\eta_4^2 &amp;= (\\zeta+\\zeta^{-1} + \\zeta^4+\\zeta^{-4})^2 = (\\zeta+\\zeta^{-1})^2 + 2(\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4}) + (\\zeta^4+\\zeta^{-4})^2\\\\\n&amp;= \\zeta^2+\\zeta^{-2} + \\zeta^8+\\zeta^{-8} + 4 + 2 \\beta \\\\\n&amp;= \\eta_2 -\\eta_4 + 4 + 2\\beta.\n\\end{align}</span>\nThus <span class=\"math-container\">$\\beta = \\eta_8(\\zeta^4+\\zeta^{-4}) = \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4)$</span>, so:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} = x^2 - \\eta_4x + \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4). \n\\end{align}</span>\nUsing the quadratic formula (and checking whether it has to be plus or minus) I find that:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 &amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{\\eta_4^2-2(\\eta_4^2+\\eta_4-\\eta_2-4)}\\right)\\\\\n&amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{-\\eta_4^2-2(\\eta_4-\\eta_2-4)}\\right).\n\\end{align}</span>\nWorking everything out (it is not a long computation), you end up with:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 = \\frac{1}{8} (-1 + \\sqrt{17}) + \\frac{1}{8} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{4} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNow I conclude that:\n<span class=\"math-container\">\\begin{align}\n\\cos\\left({\\frac{2\\pi}{17}}\\right) = \\frac{1}{2}\\eta_8 = \\frac{1}{16} (-1 + \\sqrt{17}) + \\frac{1}{16} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{8} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNote that under the big square root the term <span class=\"math-container\">$-\\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}$</span> appears, while Gauss actually gave the term <span class=\"math-container\">$-\\sqrt{34-2\\sqrt{17}}-2\\sqrt{34+2\\sqrt{17}}$</span> there. 
You can show by squaring both expressions that you get <span class=\"math-container\">$170+38\\sqrt{17}$</span>, thus both are square roots of <span class=\"math-container\">$170+38\\sqrt{17}$</span>, and both are negative; therefore they represent the same number.</p>\n<p>You can try the same process with <span class=\"math-container\">$\\zeta_{257}$</span> (which will take 7 steps, whereas this took 3).</p>\n" }, { "answer_id": 2109558, "score": 3, "text": "<p>The more natural item is $$ 2 \\cos \\left( \\frac{2 \\pi}{17} \\right), $$ which is one root of \n$$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$\nfrom page 18 of <a href=\"https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false\" rel=\"nofollow noreferrer\">Reuschle (1875)</a>, using a method due to Gauss that has to do with cyclotomy and predates Galois theory. If $\\omega \\neq 1$ but $\\omega^{17} = 1,$ it is not hard to show that $\\omega + \\frac{1}{\\omega}$ is a root of the given polynomial, using\n$$ \\small 1 + \\omega + \\omega^2 + \\omega^3 + \\omega^4 + \\omega^5 + \\omega^6 + \\omega^7 + \\omega^8 + \\omega^9 + \\omega^{10} + \\omega^{11} + \\omega^{12} + \\omega^{13} + \\omega^{14} + \\omega^{15} + \\omega^{16} =0 $$</p>\n\n<p><a href=\"https://i.stack.imgur.com/iGPye.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iGPye.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HBV6z.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HBV6z.jpg\" alt=\"enter image description here\"></a></p>\n" }
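As a cross-check of the two answers above, the following sketch (again only a verification aid; all constants are copied from the answers) numerically confirms the closed form for <span class=\"math-container\">$\\cos(2\\pi/17)$</span>, that <span class=\"math-container\">$2\\cos(2\\pi/17)$</span> is a root of the degree-8 polynomial from Reuschle's table, and that the two "Gauss terms" compared above are indeed equal.

```python
# Verify the closed form for cos(2*pi/17) and Reuschle's degree-8 polynomial.
import math

s17 = math.sqrt(17)
a = math.sqrt(34 - 2 * s17)
closed_form = ((-1 + s17) / 16
               + a / 16
               + math.sqrt(17 + 3 * s17 - 0.5 * (3 + s17) * a) / 8)
print(abs(closed_form - math.cos(2 * math.pi / 17)))   # ~1e-16

x = 2 * math.cos(2 * math.pi / 17)
p = x**8 + x**7 - 7*x**6 - 6*x**5 + 15*x**4 + 10*x**3 - 10*x**2 - 4*x + 1
print(abs(p))                                          # ~1e-13: x is a root

# The two negative "Gauss terms" that square to 170 + 38*sqrt(17):
t1 = -0.5 * (3 + s17) * a
t2 = -a - 2 * math.sqrt(34 + 2 * s17)
print(abs(t1 - t2))                                    # ~1e-14: same number
```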
In other words, it is possible to express <span class=\"math-container\">$\\zeta$</span> using square roots (square roots of square roots ..).</p>\n<p>Consider the Gauss periods:\n<span class=\"math-container\">\\begin{align}\n\\eta_2&amp;:=\\zeta +\\zeta^{-1} +\\zeta^2 +\\zeta^{-2} + \\zeta^4 + \\zeta^{-4} + \\zeta^8 +\\zeta^{-8} \\\\\n\\eta_4&amp;:= \\zeta +\\zeta^{-1} + \\zeta^4 + \\zeta^{-4}\\\\\n\\eta_8&amp;:= \\zeta^1 + \\zeta^{-1}.\n\\end{align}</span>\nNotice furthermore <span class=\"math-container\">$\\zeta^{-1}$</span> is the complex conjugated of <span class=\"math-container\">$\\zeta$</span>, thus <span class=\"math-container\">$\\eta_8 := \\zeta+\\zeta^{-1} = 2\\cos\\left(\\frac{2\\pi}{17}\\right)$</span>. Therefore, I will express <span class=\"math-container\">$\\eta_8$</span> using radicals.</p>\n<p>Let's look at the tower of fields: <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8) \\subset \\mathbf{Q}(\\zeta)$</span>. Note that <span class=\"math-container\">$\\mathbf{Q}(\\eta_8)$</span> corresponds to the unique subgroup <span class=\"math-container\">$H_8:=\\langle -1 \\rangle \\subset \\mathbf{F}_{17}^*$</span> of index <span class=\"math-container\">$8$</span>. Where <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> resp. <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> correspond to the unique subgroups <span class=\"math-container\">$H_4 := \\langle 4 \\rangle$</span> resp. <span class=\"math-container\">$H_2:=\\langle -8 \\rangle$</span> of index <span class=\"math-container\">$4$</span> resp. <span class=\"math-container\">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class=\"math-container\">$\\mathbf{F}_{17}^* \\supset H_2 \\supset H_4 \\supset H_8 \\supset \\{1\\}$</span>.</p>\n<p>What I will do now is express <span class=\"math-container\">$\\eta_4$</span> in terms of <span class=\"math-container\">$\\eta_2$</span> (and square roots of it), and <span class=\"math-container\">$\\eta_8$</span> in terms of <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class=\"math-container\">$\\eta_j$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_k) \\subset \\mathbf{Q}(\\eta_j)$</span> and then using the abc-formula for quadratics.</p>\n<p>The minimum polynomial of <span class=\"math-container\">$\\eta_2$</span> over <span class=\"math-container\">$\\mathbf{Q}$</span> denoted by <span class=\"math-container\">$f^{\\eta_2}_{\\mathbf{Q}}$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_2}_{\\mathbf{Q}} = (x-\\eta_2)\\cdot \\left(x-(\\zeta^3+\\zeta^{-3}+\\zeta^5+\\zeta^{-5}+\\zeta^6+\\zeta^{-6}+\\zeta^7+\\zeta^{-7} \\right)=x^2+x-4.\n\\end{align}</span>\nThis fact follows from the fact that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_2 = \\{ H_2, 3H_2\\}$</span> (<span class=\"math-container\">$\\eta_2$</span> has one different conjugate). I can see that <span class=\"math-container\">$\\eta_2 = \\frac{1}{2}(-1\\pm \\sqrt{17})$</span> using the abc-formula. 
Note that you can verify with a calculator (check that the sum<span class=\"math-container\">$\\sum_{k \\in H_2}\\cos(2k\\frac{\\pi}{17})$</span> is positive), by using the definition of <span class=\"math-container\">$\\eta_2$</span>, that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_2 = \\frac{1}{2}(-1+ \\sqrt{17}). \n\\end{align}</span></p>\n<p>I turn to <span class=\"math-container\">$\\eta_4$</span>, where I have that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_4 = \\{ H_4,3H_4,6H_4,8H_4\\}$</span>, which corresponds to the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_4)$</span>. However, I restrict to <span class=\"math-container\">$\\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4)$</span> which corresponds to <span class=\"math-container\">$H_2/H_4 = \\{ H_4,8H_4\\}$</span> (<span class=\"math-container\">$\\eta_4$</span> has one other conjugate over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span>). Therfore the minimum polynomial <span class=\"math-container\">$f^{\\eta_4}_{\\mathbf{Q}(\\eta_2)}$</span> of <span class=\"math-container\">$\\eta_4$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_4}_{\\mathbf{Q}(\\eta_2)} &amp;= (x-\\eta_4)\\cdot \\left(x- (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8} )\\right) \\\\\n&amp;= x^2 -(\\eta_4 +\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})x +\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})\\\\\n&amp;= x^2-\\eta_2x-1.\n\\end{align}</span>\nWhere you can verify that <span class=\"math-container\">$\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class=\"math-container\">$\\zeta$</span> is <span class=\"math-container\">$\\Phi_{17} = \\sum^{16}_{k=0}x^k$</span> ( in other words <span class=\"math-container\">$\\sum^{16}_{k=0}\\zeta^k =0$</span>). Now using the abc-formula you can see that:\n<span class=\"math-container\">$\\eta_4 = \\frac{1}{2}(\\eta_2 \\pm \\sqrt{\\eta_2^2+4})$</span>. Where you can check again that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_4 = \\frac{1}{2}(\\eta_2 + \\sqrt{\\eta_2^2+4}),\n\\end{align}</span> by simply noting that <span class=\"math-container\">$\\sum_{k \\in H_4} \\cos(2k\\frac{\\pi}{17})$</span> is positive.</p>\n<p>Last but not least, I consider <span class=\"math-container\">$\\eta_8$</span>. I'm interested in the extension <span class=\"math-container\">$\\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8)$</span> with corresponding group <span class=\"math-container\">$H_4/H_8 = \\{ H_8,4H_8\\}$</span>. 
So, the minimum polynomial of <span class=\"math-container\">$\\eta_8$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} &amp;= (x-\\eta_8) \\cdot \\left(x- (\\zeta^4+\\zeta^{-4})\\right) \\\\\n&amp;= x^2-(\\eta_8+\\zeta^4+\\zeta^{-4})x+\\eta_8(\\zeta^4+\\zeta^{-4})\\\\\n&amp;= x^2-\\eta_4x+\\beta.\n\\end{align}</span>\nNow <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = (\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4})$</span> can be expressed using <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> by noticing that:\n<span class=\"math-container\">\\begin{align}\n\\eta_4^2 &amp;:= (\\zeta+\\zeta^{-1} + \\zeta^4+\\zeta^{-4})^2 = (\\zeta+\\zeta^{-1})^2 + 2(\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4}) + (\\zeta^4+\\zeta^{-4})^2\\\\\n&amp;= \\zeta^2+\\zeta^{-2} + \\zeta^8+\\zeta^{-8} + 4 + 2 \\beta \\\\\n&amp;= \\eta_2 -\\eta_4 + 4 + 2\\beta.\n\\end{align}</span>\nThus <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4)$</span>. So I can see that:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} = x^2 - \\eta_4x + \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4). \n\\end{align}</span>\nUsing the abc-formula (and checking whether it has to plus or minus) I find that:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 &amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{\\eta_4^2-2(\\eta_4^2+\\eta_4-\\eta_2-4)}\\right)\\\\\n&amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{-\\eta_4^2-2(\\eta_4-\\eta_2-4)}\\right).\n\\end{align}</span>\nWorking everything out (not very long) you end up with:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 = \\frac{1}{8} (-1 + \\sqrt{17}) + \\frac{1}{8} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{4} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNow I conclude that:\n<span class=\"math-container\">\\begin{align}\n\\cos\\left({\\frac{2\\pi}{17}}\\right) = \\frac{1}{2}\\eta_8 = \\frac{1}{16} (-1 + \\sqrt{17}) + \\frac{1}{16} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{8} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNote that in the big square root the term <span class=\"math-container\">$ \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class=\"math-container\">$-\\sqrt{34-2\\sqrt{17}}-2\\sqrt{34+2\\sqrt{17}}$</span>. 
You can show by squaring both expressions that you get <span class=\"math-container\">$170+38\\sqrt{17}$</span>, thus both are roots of <span class=\"math-container\">$170+38\\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p>\n<p>You can try the same process with <span class=\"math-container\">$\\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p>\n" }, { "answer_id": 2109558, "score": 3, "text": "<p>The more natural item is $$ 2 \\cos \\left( \\frac{2 \\pi}{17} \\right), $$ which is one root of \n$$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$\nfrom page 18 of <a href=\"https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false\" rel=\"nofollow noreferrer\">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\\omega \\neq 1$ but $\\omega^{17} = 1,$ it is not hard to show that $\\omega + \\frac{1}{\\omega}$ is a root of the given polynomial, using\n$$ \\small 1 + \\omega + \\omega^2 + \\omega^3 + \\omega^4 + \\omega^5 + \\omega^6 + \\omega^7 + \\omega^8 + \\omega^9 + \\omega^{10} + \\omega^{11} + \\omega^{12} + \\omega^{13} + \\omega^{14} + \\omega^{15} + \\omega^{16} =0 $$</p>\n\n<p><a href=\"https://i.stack.imgur.com/iGPye.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iGPye.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HBV6z.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HBV6z.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 4726027, "score": 3, "text": "<p>I will show this by using Galois-theory. I will be using some properties of the fields <span class=\"math-container\">$\\mathbf{Q}(\\zeta_p)$</span> with <span class=\"math-container\">$p$</span> prime, which you can find <a href=\"https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf\" rel=\"nofollow noreferrer\">here</a> on page 51.</p>\n<p>I start by looking at the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\zeta_{17})$</span>, where I define <span class=\"math-container\">$\\zeta:=\\zeta_{17}:= \\text{exp}( \\frac{2 \\pi i}{17}) = \\cos\\left(\\frac{2\\pi}{17}\\right)+i\\sin\\left(\\frac{2\\pi}{17}\\right)$</span>. The galois-group <span class=\"math-container\">$\\text{Gal}(\\mathbf{Q}(\\zeta)/\\mathbf{Q})$</span> is isomorphic to <span class=\"math-container\">$\\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class=\"math-container\">$16$</span>, which is a power of two, therefore <span class=\"math-container\">$\\zeta$</span> is constructible using ruler and compass. 
In other words, it is possible to express <span class=\"math-container\">$\\zeta$</span> using square roots (square roots of square roots ..).</p>\n<p>Consider the Gauss periods:\n<span class=\"math-container\">\\begin{align}\n\\eta_2&amp;:=\\zeta +\\zeta^{-1} +\\zeta^2 +\\zeta^{-2} + \\zeta^4 + \\zeta^{-4} + \\zeta^8 +\\zeta^{-8} \\\\\n\\eta_4&amp;:= \\zeta +\\zeta^{-1} + \\zeta^4 + \\zeta^{-4}\\\\\n\\eta_8&amp;:= \\zeta^1 + \\zeta^{-1}.\n\\end{align}</span>\nNotice furthermore <span class=\"math-container\">$\\zeta^{-1}$</span> is the complex conjugated of <span class=\"math-container\">$\\zeta$</span>, thus <span class=\"math-container\">$\\eta_8 := \\zeta+\\zeta^{-1} = 2\\cos\\left(\\frac{2\\pi}{17}\\right)$</span>. Therefore, I will express <span class=\"math-container\">$\\eta_8$</span> using radicals.</p>\n<p>Let's look at the tower of fields: <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8) \\subset \\mathbf{Q}(\\zeta)$</span>. Note that <span class=\"math-container\">$\\mathbf{Q}(\\eta_8)$</span> corresponds to the unique subgroup <span class=\"math-container\">$H_8:=\\langle -1 \\rangle \\subset \\mathbf{F}_{17}^*$</span> of index <span class=\"math-container\">$8$</span>. Where <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> resp. <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> correspond to the unique subgroups <span class=\"math-container\">$H_4 := \\langle 4 \\rangle$</span> resp. <span class=\"math-container\">$H_2:=\\langle -8 \\rangle$</span> of index <span class=\"math-container\">$4$</span> resp. <span class=\"math-container\">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class=\"math-container\">$\\mathbf{F}_{17}^* \\supset H_2 \\supset H_4 \\supset H_8 \\supset \\{1\\}$</span>.</p>\n<p>What I will do now is express <span class=\"math-container\">$\\eta_4$</span> in terms of <span class=\"math-container\">$\\eta_2$</span> (and square roots of it), and <span class=\"math-container\">$\\eta_8$</span> in terms of <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class=\"math-container\">$\\eta_j$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_k) \\subset \\mathbf{Q}(\\eta_j)$</span> and then using the abc-formula for quadratics.</p>\n<p>The minimum polynomial of <span class=\"math-container\">$\\eta_2$</span> over <span class=\"math-container\">$\\mathbf{Q}$</span> denoted by <span class=\"math-container\">$f^{\\eta_2}_{\\mathbf{Q}}$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_2}_{\\mathbf{Q}} = (x-\\eta_2)\\cdot \\left(x-(\\zeta^3+\\zeta^{-3}+\\zeta^5+\\zeta^{-5}+\\zeta^6+\\zeta^{-6}+\\zeta^7+\\zeta^{-7} \\right)=x^2+x-4.\n\\end{align}</span>\nThis fact follows from the fact that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_2 = \\{ H_2, 3H_2\\}$</span> (<span class=\"math-container\">$\\eta_2$</span> has one different conjugate). I can see that <span class=\"math-container\">$\\eta_2 = \\frac{1}{2}(-1\\pm \\sqrt{17})$</span> using the abc-formula. 
Note that you can verify with a calculator (check that the sum<span class=\"math-container\">$\\sum_{k \\in H_2}\\cos(2k\\frac{\\pi}{17})$</span> is positive), by using the definition of <span class=\"math-container\">$\\eta_2$</span>, that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_2 = \\frac{1}{2}(-1+ \\sqrt{17}). \n\\end{align}</span></p>\n<p>I turn to <span class=\"math-container\">$\\eta_4$</span>, where I have that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_4 = \\{ H_4,3H_4,6H_4,8H_4\\}$</span>, which corresponds to the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_4)$</span>. However, I restrict to <span class=\"math-container\">$\\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4)$</span> which corresponds to <span class=\"math-container\">$H_2/H_4 = \\{ H_4,8H_4\\}$</span> (<span class=\"math-container\">$\\eta_4$</span> has one other conjugate over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span>). Therfore the minimum polynomial <span class=\"math-container\">$f^{\\eta_4}_{\\mathbf{Q}(\\eta_2)}$</span> of <span class=\"math-container\">$\\eta_4$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_4}_{\\mathbf{Q}(\\eta_2)} &amp;= (x-\\eta_4)\\cdot \\left(x- (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8} )\\right) \\\\\n&amp;= x^2 -(\\eta_4 +\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})x +\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})\\\\\n&amp;= x^2-\\eta_2x-1.\n\\end{align}</span>\nWhere you can verify that <span class=\"math-container\">$\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class=\"math-container\">$\\zeta$</span> is <span class=\"math-container\">$\\Phi_{17} = \\sum^{16}_{k=0}x^k$</span> ( in other words <span class=\"math-container\">$\\sum^{16}_{k=0}\\zeta^k =0$</span>). Now using the abc-formula you can see that:\n<span class=\"math-container\">$\\eta_4 = \\frac{1}{2}(\\eta_2 \\pm \\sqrt{\\eta_2^2+4})$</span>. Where you can check again that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_4 = \\frac{1}{2}(\\eta_2 + \\sqrt{\\eta_2^2+4}),\n\\end{align}</span> by simply noting that <span class=\"math-container\">$\\sum_{k \\in H_4} \\cos(2k\\frac{\\pi}{17})$</span> is positive.</p>\n<p>Last but not least, I consider <span class=\"math-container\">$\\eta_8$</span>. I'm interested in the extension <span class=\"math-container\">$\\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8)$</span> with corresponding group <span class=\"math-container\">$H_4/H_8 = \\{ H_8,4H_8\\}$</span>. 
So, the minimum polynomial of <span class=\"math-container\">$\\eta_8$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} &amp;= (x-\\eta_8) \\cdot \\left(x- (\\zeta^4+\\zeta^{-4})\\right) \\\\\n&amp;= x^2-(\\eta_8+\\zeta^4+\\zeta^{-4})x+\\eta_8(\\zeta^4+\\zeta^{-4})\\\\\n&amp;= x^2-\\eta_4x+\\beta.\n\\end{align}</span>\nNow <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = (\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4})$</span> can be expressed using <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> by noticing that:\n<span class=\"math-container\">\\begin{align}\n\\eta_4^2 &amp;:= (\\zeta+\\zeta^{-1} + \\zeta^4+\\zeta^{-4})^2 = (\\zeta+\\zeta^{-1})^2 + 2(\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4}) + (\\zeta^4+\\zeta^{-4})^2\\\\\n&amp;= \\zeta^2+\\zeta^{-2} + \\zeta^8+\\zeta^{-8} + 4 + 2 \\beta \\\\\n&amp;= \\eta_2 -\\eta_4 + 4 + 2\\beta.\n\\end{align}</span>\nThus <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4)$</span>. So I can see that:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} = x^2 - \\eta_4x + \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4). \n\\end{align}</span>\nUsing the abc-formula (and checking whether it has to plus or minus) I find that:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 &amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{\\eta_4^2-2(\\eta_4^2+\\eta_4-\\eta_2-4)}\\right)\\\\\n&amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{-\\eta_4^2-2(\\eta_4-\\eta_2-4)}\\right).\n\\end{align}</span>\nWorking everything out (not very long) you end up with:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 = \\frac{1}{8} (-1 + \\sqrt{17}) + \\frac{1}{8} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{4} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNow I conclude that:\n<span class=\"math-container\">\\begin{align}\n\\cos\\left({\\frac{2\\pi}{17}}\\right) = \\frac{1}{2}\\eta_8 = \\frac{1}{16} (-1 + \\sqrt{17}) + \\frac{1}{16} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{8} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNote that in the big square root the term <span class=\"math-container\">$ \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class=\"math-container\">$-\\sqrt{34-2\\sqrt{17}}-2\\sqrt{34+2\\sqrt{17}}$</span>. 
You can show by squaring both expressions that you get <span class=\"math-container\">$170+38\\sqrt{17}$</span>, thus both are roots of <span class=\"math-container\">$170+38\\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p>\n<p>You can try the same process with <span class=\"math-container\">$\\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p>\n" }, { "answer_id": 2109558, "score": 3, "text": "<p>The more natural item is $$ 2 \\cos \\left( \\frac{2 \\pi}{17} \\right), $$ which is one root of \n$$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$\nfrom page 18 of <a href=\"https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false\" rel=\"nofollow noreferrer\">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\\omega \\neq 1$ but $\\omega^{17} = 1,$ it is not hard to show that $\\omega + \\frac{1}{\\omega}$ is a root of the given polynomial, using\n$$ \\small 1 + \\omega + \\omega^2 + \\omega^3 + \\omega^4 + \\omega^5 + \\omega^6 + \\omega^7 + \\omega^8 + \\omega^9 + \\omega^{10} + \\omega^{11} + \\omega^{12} + \\omega^{13} + \\omega^{14} + \\omega^{15} + \\omega^{16} =0 $$</p>\n\n<p><a href=\"https://i.stack.imgur.com/iGPye.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iGPye.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HBV6z.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HBV6z.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 4726027, "score": 3, "text": "<p>I will show this by using Galois-theory. I will be using some properties of the fields <span class=\"math-container\">$\\mathbf{Q}(\\zeta_p)$</span> with <span class=\"math-container\">$p$</span> prime, which you can find <a href=\"https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf\" rel=\"nofollow noreferrer\">here</a> on page 51.</p>\n<p>I start by looking at the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\zeta_{17})$</span>, where I define <span class=\"math-container\">$\\zeta:=\\zeta_{17}:= \\text{exp}( \\frac{2 \\pi i}{17}) = \\cos\\left(\\frac{2\\pi}{17}\\right)+i\\sin\\left(\\frac{2\\pi}{17}\\right)$</span>. The galois-group <span class=\"math-container\">$\\text{Gal}(\\mathbf{Q}(\\zeta)/\\mathbf{Q})$</span> is isomorphic to <span class=\"math-container\">$\\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class=\"math-container\">$16$</span>, which is a power of two, therefore <span class=\"math-container\">$\\zeta$</span> is constructible using ruler and compass. 
In other words, it is possible to express <span class=\"math-container\">$\\zeta$</span> using square roots (square roots of square roots ..).</p>\n<p>Consider the Gauss periods:\n<span class=\"math-container\">\\begin{align}\n\\eta_2&amp;:=\\zeta +\\zeta^{-1} +\\zeta^2 +\\zeta^{-2} + \\zeta^4 + \\zeta^{-4} + \\zeta^8 +\\zeta^{-8} \\\\\n\\eta_4&amp;:= \\zeta +\\zeta^{-1} + \\zeta^4 + \\zeta^{-4}\\\\\n\\eta_8&amp;:= \\zeta^1 + \\zeta^{-1}.\n\\end{align}</span>\nNotice furthermore <span class=\"math-container\">$\\zeta^{-1}$</span> is the complex conjugated of <span class=\"math-container\">$\\zeta$</span>, thus <span class=\"math-container\">$\\eta_8 := \\zeta+\\zeta^{-1} = 2\\cos\\left(\\frac{2\\pi}{17}\\right)$</span>. Therefore, I will express <span class=\"math-container\">$\\eta_8$</span> using radicals.</p>\n<p>Let's look at the tower of fields: <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8) \\subset \\mathbf{Q}(\\zeta)$</span>. Note that <span class=\"math-container\">$\\mathbf{Q}(\\eta_8)$</span> corresponds to the unique subgroup <span class=\"math-container\">$H_8:=\\langle -1 \\rangle \\subset \\mathbf{F}_{17}^*$</span> of index <span class=\"math-container\">$8$</span>. Where <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> resp. <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> correspond to the unique subgroups <span class=\"math-container\">$H_4 := \\langle 4 \\rangle$</span> resp. <span class=\"math-container\">$H_2:=\\langle -8 \\rangle$</span> of index <span class=\"math-container\">$4$</span> resp. <span class=\"math-container\">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class=\"math-container\">$\\mathbf{F}_{17}^* \\supset H_2 \\supset H_4 \\supset H_8 \\supset \\{1\\}$</span>.</p>\n<p>What I will do now is express <span class=\"math-container\">$\\eta_4$</span> in terms of <span class=\"math-container\">$\\eta_2$</span> (and square roots of it), and <span class=\"math-container\">$\\eta_8$</span> in terms of <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class=\"math-container\">$\\eta_j$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_k) \\subset \\mathbf{Q}(\\eta_j)$</span> and then using the abc-formula for quadratics.</p>\n<p>The minimum polynomial of <span class=\"math-container\">$\\eta_2$</span> over <span class=\"math-container\">$\\mathbf{Q}$</span> denoted by <span class=\"math-container\">$f^{\\eta_2}_{\\mathbf{Q}}$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_2}_{\\mathbf{Q}} = (x-\\eta_2)\\cdot \\left(x-(\\zeta^3+\\zeta^{-3}+\\zeta^5+\\zeta^{-5}+\\zeta^6+\\zeta^{-6}+\\zeta^7+\\zeta^{-7} \\right)=x^2+x-4.\n\\end{align}</span>\nThis fact follows from the fact that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_2 = \\{ H_2, 3H_2\\}$</span> (<span class=\"math-container\">$\\eta_2$</span> has one different conjugate). I can see that <span class=\"math-container\">$\\eta_2 = \\frac{1}{2}(-1\\pm \\sqrt{17})$</span> using the abc-formula. 
Note that you can verify with a calculator (check that the sum<span class=\"math-container\">$\\sum_{k \\in H_2}\\cos(2k\\frac{\\pi}{17})$</span> is positive), by using the definition of <span class=\"math-container\">$\\eta_2$</span>, that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_2 = \\frac{1}{2}(-1+ \\sqrt{17}). \n\\end{align}</span></p>\n<p>I turn to <span class=\"math-container\">$\\eta_4$</span>, where I have that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_4 = \\{ H_4,3H_4,6H_4,8H_4\\}$</span>, which corresponds to the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_4)$</span>. However, I restrict to <span class=\"math-container\">$\\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4)$</span> which corresponds to <span class=\"math-container\">$H_2/H_4 = \\{ H_4,8H_4\\}$</span> (<span class=\"math-container\">$\\eta_4$</span> has one other conjugate over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span>). Therfore the minimum polynomial <span class=\"math-container\">$f^{\\eta_4}_{\\mathbf{Q}(\\eta_2)}$</span> of <span class=\"math-container\">$\\eta_4$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_4}_{\\mathbf{Q}(\\eta_2)} &amp;= (x-\\eta_4)\\cdot \\left(x- (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8} )\\right) \\\\\n&amp;= x^2 -(\\eta_4 +\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})x +\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})\\\\\n&amp;= x^2-\\eta_2x-1.\n\\end{align}</span>\nWhere you can verify that <span class=\"math-container\">$\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class=\"math-container\">$\\zeta$</span> is <span class=\"math-container\">$\\Phi_{17} = \\sum^{16}_{k=0}x^k$</span> ( in other words <span class=\"math-container\">$\\sum^{16}_{k=0}\\zeta^k =0$</span>). Now using the abc-formula you can see that:\n<span class=\"math-container\">$\\eta_4 = \\frac{1}{2}(\\eta_2 \\pm \\sqrt{\\eta_2^2+4})$</span>. Where you can check again that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_4 = \\frac{1}{2}(\\eta_2 + \\sqrt{\\eta_2^2+4}),\n\\end{align}</span> by simply noting that <span class=\"math-container\">$\\sum_{k \\in H_4} \\cos(2k\\frac{\\pi}{17})$</span> is positive.</p>\n<p>Last but not least, I consider <span class=\"math-container\">$\\eta_8$</span>. I'm interested in the extension <span class=\"math-container\">$\\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8)$</span> with corresponding group <span class=\"math-container\">$H_4/H_8 = \\{ H_8,4H_8\\}$</span>. 
So, the minimum polynomial of <span class=\"math-container\">$\\eta_8$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} &amp;= (x-\\eta_8) \\cdot \\left(x- (\\zeta^4+\\zeta^{-4})\\right) \\\\\n&amp;= x^2-(\\eta_8+\\zeta^4+\\zeta^{-4})x+\\eta_8(\\zeta^4+\\zeta^{-4})\\\\\n&amp;= x^2-\\eta_4x+\\beta.\n\\end{align}</span>\nNow <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = (\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4})$</span> can be expressed using <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> by noticing that:\n<span class=\"math-container\">\\begin{align}\n\\eta_4^2 &amp;:= (\\zeta+\\zeta^{-1} + \\zeta^4+\\zeta^{-4})^2 = (\\zeta+\\zeta^{-1})^2 + 2(\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4}) + (\\zeta^4+\\zeta^{-4})^2\\\\\n&amp;= \\zeta^2+\\zeta^{-2} + \\zeta^8+\\zeta^{-8} + 4 + 2 \\beta \\\\\n&amp;= \\eta_2 -\\eta_4 + 4 + 2\\beta.\n\\end{align}</span>\nThus <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4)$</span>. So I can see that:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} = x^2 - \\eta_4x + \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4). \n\\end{align}</span>\nUsing the abc-formula (and checking whether it has to plus or minus) I find that:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 &amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{\\eta_4^2-2(\\eta_4^2+\\eta_4-\\eta_2-4)}\\right)\\\\\n&amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{-\\eta_4^2-2(\\eta_4-\\eta_2-4)}\\right).\n\\end{align}</span>\nWorking everything out (not very long) you end up with:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 = \\frac{1}{8} (-1 + \\sqrt{17}) + \\frac{1}{8} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{4} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNow I conclude that:\n<span class=\"math-container\">\\begin{align}\n\\cos\\left({\\frac{2\\pi}{17}}\\right) = \\frac{1}{2}\\eta_8 = \\frac{1}{16} (-1 + \\sqrt{17}) + \\frac{1}{16} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{8} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNote that in the big square root the term <span class=\"math-container\">$ \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class=\"math-container\">$-\\sqrt{34-2\\sqrt{17}}-2\\sqrt{34+2\\sqrt{17}}$</span>. 
You can show by squaring both expressions that you get <span class=\"math-container\">$170+38\\sqrt{17}$</span>, thus both are roots of <span class=\"math-container\">$170+38\\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p>\n<p>You can try the same process with <span class=\"math-container\">$\\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p>\n" }, { "answer_id": 2109558, "score": 3, "text": "<p>The more natural item is $$ 2 \\cos \\left( \\frac{2 \\pi}{17} \\right), $$ which is one root of \n$$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$\nfrom page 18 of <a href=\"https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false\" rel=\"nofollow noreferrer\">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\\omega \\neq 1$ but $\\omega^{17} = 1,$ it is not hard to show that $\\omega + \\frac{1}{\\omega}$ is a root of the given polynomial, using\n$$ \\small 1 + \\omega + \\omega^2 + \\omega^3 + \\omega^4 + \\omega^5 + \\omega^6 + \\omega^7 + \\omega^8 + \\omega^9 + \\omega^{10} + \\omega^{11} + \\omega^{12} + \\omega^{13} + \\omega^{14} + \\omega^{15} + \\omega^{16} =0 $$</p>\n\n<p><a href=\"https://i.stack.imgur.com/iGPye.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iGPye.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HBV6z.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HBV6z.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 4726027, "score": 3, "text": "<p>I will show this by using Galois-theory. I will be using some properties of the fields <span class=\"math-container\">$\\mathbf{Q}(\\zeta_p)$</span> with <span class=\"math-container\">$p$</span> prime, which you can find <a href=\"https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf\" rel=\"nofollow noreferrer\">here</a> on page 51.</p>\n<p>I start by looking at the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\zeta_{17})$</span>, where I define <span class=\"math-container\">$\\zeta:=\\zeta_{17}:= \\text{exp}( \\frac{2 \\pi i}{17}) = \\cos\\left(\\frac{2\\pi}{17}\\right)+i\\sin\\left(\\frac{2\\pi}{17}\\right)$</span>. The galois-group <span class=\"math-container\">$\\text{Gal}(\\mathbf{Q}(\\zeta)/\\mathbf{Q})$</span> is isomorphic to <span class=\"math-container\">$\\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class=\"math-container\">$16$</span>, which is a power of two, therefore <span class=\"math-container\">$\\zeta$</span> is constructible using ruler and compass. 
In other words, it is possible to express <span class=\"math-container\">$\\zeta$</span> using square roots (square roots of square roots ..).</p>\n<p>Consider the Gauss periods:\n<span class=\"math-container\">\\begin{align}\n\\eta_2&amp;:=\\zeta +\\zeta^{-1} +\\zeta^2 +\\zeta^{-2} + \\zeta^4 + \\zeta^{-4} + \\zeta^8 +\\zeta^{-8} \\\\\n\\eta_4&amp;:= \\zeta +\\zeta^{-1} + \\zeta^4 + \\zeta^{-4}\\\\\n\\eta_8&amp;:= \\zeta^1 + \\zeta^{-1}.\n\\end{align}</span>\nNotice furthermore <span class=\"math-container\">$\\zeta^{-1}$</span> is the complex conjugated of <span class=\"math-container\">$\\zeta$</span>, thus <span class=\"math-container\">$\\eta_8 := \\zeta+\\zeta^{-1} = 2\\cos\\left(\\frac{2\\pi}{17}\\right)$</span>. Therefore, I will express <span class=\"math-container\">$\\eta_8$</span> using radicals.</p>\n<p>Let's look at the tower of fields: <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8) \\subset \\mathbf{Q}(\\zeta)$</span>. Note that <span class=\"math-container\">$\\mathbf{Q}(\\eta_8)$</span> corresponds to the unique subgroup <span class=\"math-container\">$H_8:=\\langle -1 \\rangle \\subset \\mathbf{F}_{17}^*$</span> of index <span class=\"math-container\">$8$</span>. Where <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> resp. <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> correspond to the unique subgroups <span class=\"math-container\">$H_4 := \\langle 4 \\rangle$</span> resp. <span class=\"math-container\">$H_2:=\\langle -8 \\rangle$</span> of index <span class=\"math-container\">$4$</span> resp. <span class=\"math-container\">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class=\"math-container\">$\\mathbf{F}_{17}^* \\supset H_2 \\supset H_4 \\supset H_8 \\supset \\{1\\}$</span>.</p>\n<p>What I will do now is express <span class=\"math-container\">$\\eta_4$</span> in terms of <span class=\"math-container\">$\\eta_2$</span> (and square roots of it), and <span class=\"math-container\">$\\eta_8$</span> in terms of <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class=\"math-container\">$\\eta_j$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_k) \\subset \\mathbf{Q}(\\eta_j)$</span> and then using the abc-formula for quadratics.</p>\n<p>The minimum polynomial of <span class=\"math-container\">$\\eta_2$</span> over <span class=\"math-container\">$\\mathbf{Q}$</span> denoted by <span class=\"math-container\">$f^{\\eta_2}_{\\mathbf{Q}}$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_2}_{\\mathbf{Q}} = (x-\\eta_2)\\cdot \\left(x-(\\zeta^3+\\zeta^{-3}+\\zeta^5+\\zeta^{-5}+\\zeta^6+\\zeta^{-6}+\\zeta^7+\\zeta^{-7} \\right)=x^2+x-4.\n\\end{align}</span>\nThis fact follows from the fact that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_2 = \\{ H_2, 3H_2\\}$</span> (<span class=\"math-container\">$\\eta_2$</span> has one different conjugate). I can see that <span class=\"math-container\">$\\eta_2 = \\frac{1}{2}(-1\\pm \\sqrt{17})$</span> using the abc-formula. 
Note that you can verify with a calculator (check that the sum<span class=\"math-container\">$\\sum_{k \\in H_2}\\cos(2k\\frac{\\pi}{17})$</span> is positive), by using the definition of <span class=\"math-container\">$\\eta_2$</span>, that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_2 = \\frac{1}{2}(-1+ \\sqrt{17}). \n\\end{align}</span></p>\n<p>I turn to <span class=\"math-container\">$\\eta_4$</span>, where I have that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_4 = \\{ H_4,3H_4,6H_4,8H_4\\}$</span>, which corresponds to the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_4)$</span>. However, I restrict to <span class=\"math-container\">$\\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4)$</span> which corresponds to <span class=\"math-container\">$H_2/H_4 = \\{ H_4,8H_4\\}$</span> (<span class=\"math-container\">$\\eta_4$</span> has one other conjugate over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span>). Therfore the minimum polynomial <span class=\"math-container\">$f^{\\eta_4}_{\\mathbf{Q}(\\eta_2)}$</span> of <span class=\"math-container\">$\\eta_4$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_4}_{\\mathbf{Q}(\\eta_2)} &amp;= (x-\\eta_4)\\cdot \\left(x- (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8} )\\right) \\\\\n&amp;= x^2 -(\\eta_4 +\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})x +\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})\\\\\n&amp;= x^2-\\eta_2x-1.\n\\end{align}</span>\nWhere you can verify that <span class=\"math-container\">$\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class=\"math-container\">$\\zeta$</span> is <span class=\"math-container\">$\\Phi_{17} = \\sum^{16}_{k=0}x^k$</span> ( in other words <span class=\"math-container\">$\\sum^{16}_{k=0}\\zeta^k =0$</span>). Now using the abc-formula you can see that:\n<span class=\"math-container\">$\\eta_4 = \\frac{1}{2}(\\eta_2 \\pm \\sqrt{\\eta_2^2+4})$</span>. Where you can check again that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_4 = \\frac{1}{2}(\\eta_2 + \\sqrt{\\eta_2^2+4}),\n\\end{align}</span> by simply noting that <span class=\"math-container\">$\\sum_{k \\in H_4} \\cos(2k\\frac{\\pi}{17})$</span> is positive.</p>\n<p>Last but not least, I consider <span class=\"math-container\">$\\eta_8$</span>. I'm interested in the extension <span class=\"math-container\">$\\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8)$</span> with corresponding group <span class=\"math-container\">$H_4/H_8 = \\{ H_8,4H_8\\}$</span>. 
So, the minimum polynomial of <span class=\"math-container\">$\\eta_8$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} &amp;= (x-\\eta_8) \\cdot \\left(x- (\\zeta^4+\\zeta^{-4})\\right) \\\\\n&amp;= x^2-(\\eta_8+\\zeta^4+\\zeta^{-4})x+\\eta_8(\\zeta^4+\\zeta^{-4})\\\\\n&amp;= x^2-\\eta_4x+\\beta.\n\\end{align}</span>\nNow <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = (\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4})$</span> can be expressed using <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> by noticing that:\n<span class=\"math-container\">\\begin{align}\n\\eta_4^2 &amp;:= (\\zeta+\\zeta^{-1} + \\zeta^4+\\zeta^{-4})^2 = (\\zeta+\\zeta^{-1})^2 + 2(\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4}) + (\\zeta^4+\\zeta^{-4})^2\\\\\n&amp;= \\zeta^2+\\zeta^{-2} + \\zeta^8+\\zeta^{-8} + 4 + 2 \\beta \\\\\n&amp;= \\eta_2 -\\eta_4 + 4 + 2\\beta.\n\\end{align}</span>\nThus <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4)$</span>. So I can see that:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} = x^2 - \\eta_4x + \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4). \n\\end{align}</span>\nUsing the abc-formula (and checking whether it has to plus or minus) I find that:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 &amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{\\eta_4^2-2(\\eta_4^2+\\eta_4-\\eta_2-4)}\\right)\\\\\n&amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{-\\eta_4^2-2(\\eta_4-\\eta_2-4)}\\right).\n\\end{align}</span>\nWorking everything out (not very long) you end up with:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 = \\frac{1}{8} (-1 + \\sqrt{17}) + \\frac{1}{8} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{4} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNow I conclude that:\n<span class=\"math-container\">\\begin{align}\n\\cos\\left({\\frac{2\\pi}{17}}\\right) = \\frac{1}{2}\\eta_8 = \\frac{1}{16} (-1 + \\sqrt{17}) + \\frac{1}{16} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{8} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNote that in the big square root the term <span class=\"math-container\">$ \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class=\"math-container\">$-\\sqrt{34-2\\sqrt{17}}-2\\sqrt{34+2\\sqrt{17}}$</span>. 
You can show by squaring both expressions that you get <span class=\"math-container\">$170+38\\sqrt{17}$</span>, thus both are roots of <span class=\"math-container\">$170+38\\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p>\n<p>You can try the same process with <span class=\"math-container\">$\\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p>\n" }, { "answer_id": 2109558, "score": 3, "text": "<p>The more natural item is $$ 2 \\cos \\left( \\frac{2 \\pi}{17} \\right), $$ which is one root of \n$$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$\nfrom page 18 of <a href=\"https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false\" rel=\"nofollow noreferrer\">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\\omega \\neq 1$ but $\\omega^{17} = 1,$ it is not hard to show that $\\omega + \\frac{1}{\\omega}$ is a root of the given polynomial, using\n$$ \\small 1 + \\omega + \\omega^2 + \\omega^3 + \\omega^4 + \\omega^5 + \\omega^6 + \\omega^7 + \\omega^8 + \\omega^9 + \\omega^{10} + \\omega^{11} + \\omega^{12} + \\omega^{13} + \\omega^{14} + \\omega^{15} + \\omega^{16} =0 $$</p>\n\n<p><a href=\"https://i.stack.imgur.com/iGPye.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iGPye.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HBV6z.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HBV6z.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 4726027, "score": 3, "text": "<p>I will show this by using Galois-theory. I will be using some properties of the fields <span class=\"math-container\">$\\mathbf{Q}(\\zeta_p)$</span> with <span class=\"math-container\">$p$</span> prime, which you can find <a href=\"https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf\" rel=\"nofollow noreferrer\">here</a> on page 51.</p>\n<p>I start by looking at the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\zeta_{17})$</span>, where I define <span class=\"math-container\">$\\zeta:=\\zeta_{17}:= \\text{exp}( \\frac{2 \\pi i}{17}) = \\cos\\left(\\frac{2\\pi}{17}\\right)+i\\sin\\left(\\frac{2\\pi}{17}\\right)$</span>. The galois-group <span class=\"math-container\">$\\text{Gal}(\\mathbf{Q}(\\zeta)/\\mathbf{Q})$</span> is isomorphic to <span class=\"math-container\">$\\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class=\"math-container\">$16$</span>, which is a power of two, therefore <span class=\"math-container\">$\\zeta$</span> is constructible using ruler and compass. 
In other words, it is possible to express <span class=\"math-container\">$\\zeta$</span> using square roots (square roots of square roots ..).</p>\n<p>Consider the Gauss periods:\n<span class=\"math-container\">\\begin{align}\n\\eta_2&amp;:=\\zeta +\\zeta^{-1} +\\zeta^2 +\\zeta^{-2} + \\zeta^4 + \\zeta^{-4} + \\zeta^8 +\\zeta^{-8} \\\\\n\\eta_4&amp;:= \\zeta +\\zeta^{-1} + \\zeta^4 + \\zeta^{-4}\\\\\n\\eta_8&amp;:= \\zeta^1 + \\zeta^{-1}.\n\\end{align}</span>\nNotice furthermore <span class=\"math-container\">$\\zeta^{-1}$</span> is the complex conjugated of <span class=\"math-container\">$\\zeta$</span>, thus <span class=\"math-container\">$\\eta_8 := \\zeta+\\zeta^{-1} = 2\\cos\\left(\\frac{2\\pi}{17}\\right)$</span>. Therefore, I will express <span class=\"math-container\">$\\eta_8$</span> using radicals.</p>\n<p>Let's look at the tower of fields: <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8) \\subset \\mathbf{Q}(\\zeta)$</span>. Note that <span class=\"math-container\">$\\mathbf{Q}(\\eta_8)$</span> corresponds to the unique subgroup <span class=\"math-container\">$H_8:=\\langle -1 \\rangle \\subset \\mathbf{F}_{17}^*$</span> of index <span class=\"math-container\">$8$</span>. Where <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> resp. <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> correspond to the unique subgroups <span class=\"math-container\">$H_4 := \\langle 4 \\rangle$</span> resp. <span class=\"math-container\">$H_2:=\\langle -8 \\rangle$</span> of index <span class=\"math-container\">$4$</span> resp. <span class=\"math-container\">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class=\"math-container\">$\\mathbf{F}_{17}^* \\supset H_2 \\supset H_4 \\supset H_8 \\supset \\{1\\}$</span>.</p>\n<p>What I will do now is express <span class=\"math-container\">$\\eta_4$</span> in terms of <span class=\"math-container\">$\\eta_2$</span> (and square roots of it), and <span class=\"math-container\">$\\eta_8$</span> in terms of <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class=\"math-container\">$\\eta_j$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_k) \\subset \\mathbf{Q}(\\eta_j)$</span> and then using the abc-formula for quadratics.</p>\n<p>The minimum polynomial of <span class=\"math-container\">$\\eta_2$</span> over <span class=\"math-container\">$\\mathbf{Q}$</span> denoted by <span class=\"math-container\">$f^{\\eta_2}_{\\mathbf{Q}}$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_2}_{\\mathbf{Q}} = (x-\\eta_2)\\cdot \\left(x-(\\zeta^3+\\zeta^{-3}+\\zeta^5+\\zeta^{-5}+\\zeta^6+\\zeta^{-6}+\\zeta^7+\\zeta^{-7} \\right)=x^2+x-4.\n\\end{align}</span>\nThis fact follows from the fact that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_2 = \\{ H_2, 3H_2\\}$</span> (<span class=\"math-container\">$\\eta_2$</span> has one different conjugate). I can see that <span class=\"math-container\">$\\eta_2 = \\frac{1}{2}(-1\\pm \\sqrt{17})$</span> using the abc-formula. 
Note that you can verify with a calculator (check that the sum<span class=\"math-container\">$\\sum_{k \\in H_2}\\cos(2k\\frac{\\pi}{17})$</span> is positive), by using the definition of <span class=\"math-container\">$\\eta_2$</span>, that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_2 = \\frac{1}{2}(-1+ \\sqrt{17}). \n\\end{align}</span></p>\n<p>I turn to <span class=\"math-container\">$\\eta_4$</span>, where I have that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_4 = \\{ H_4,3H_4,6H_4,8H_4\\}$</span>, which corresponds to the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_4)$</span>. However, I restrict to <span class=\"math-container\">$\\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4)$</span> which corresponds to <span class=\"math-container\">$H_2/H_4 = \\{ H_4,8H_4\\}$</span> (<span class=\"math-container\">$\\eta_4$</span> has one other conjugate over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span>). Therfore the minimum polynomial <span class=\"math-container\">$f^{\\eta_4}_{\\mathbf{Q}(\\eta_2)}$</span> of <span class=\"math-container\">$\\eta_4$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_4}_{\\mathbf{Q}(\\eta_2)} &amp;= (x-\\eta_4)\\cdot \\left(x- (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8} )\\right) \\\\\n&amp;= x^2 -(\\eta_4 +\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})x +\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})\\\\\n&amp;= x^2-\\eta_2x-1.\n\\end{align}</span>\nWhere you can verify that <span class=\"math-container\">$\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class=\"math-container\">$\\zeta$</span> is <span class=\"math-container\">$\\Phi_{17} = \\sum^{16}_{k=0}x^k$</span> ( in other words <span class=\"math-container\">$\\sum^{16}_{k=0}\\zeta^k =0$</span>). Now using the abc-formula you can see that:\n<span class=\"math-container\">$\\eta_4 = \\frac{1}{2}(\\eta_2 \\pm \\sqrt{\\eta_2^2+4})$</span>. Where you can check again that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_4 = \\frac{1}{2}(\\eta_2 + \\sqrt{\\eta_2^2+4}),\n\\end{align}</span> by simply noting that <span class=\"math-container\">$\\sum_{k \\in H_4} \\cos(2k\\frac{\\pi}{17})$</span> is positive.</p>\n<p>Last but not least, I consider <span class=\"math-container\">$\\eta_8$</span>. I'm interested in the extension <span class=\"math-container\">$\\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8)$</span> with corresponding group <span class=\"math-container\">$H_4/H_8 = \\{ H_8,4H_8\\}$</span>. 
So, the minimum polynomial of <span class=\"math-container\">$\\eta_8$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} &amp;= (x-\\eta_8) \\cdot \\left(x- (\\zeta^4+\\zeta^{-4})\\right) \\\\\n&amp;= x^2-(\\eta_8+\\zeta^4+\\zeta^{-4})x+\\eta_8(\\zeta^4+\\zeta^{-4})\\\\\n&amp;= x^2-\\eta_4x+\\beta.\n\\end{align}</span>\nNow <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = (\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4})$</span> can be expressed using <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> by noticing that:\n<span class=\"math-container\">\\begin{align}\n\\eta_4^2 &amp;= (\\zeta+\\zeta^{-1} + \\zeta^4+\\zeta^{-4})^2 = (\\zeta+\\zeta^{-1})^2 + 2(\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4}) + (\\zeta^4+\\zeta^{-4})^2\\\\\n&amp;= \\zeta^2+\\zeta^{-2} + \\zeta^8+\\zeta^{-8} + 4 + 2 \\beta \\\\\n&amp;= \\eta_2 -\\eta_4 + 4 + 2\\beta.\n\\end{align}</span>\nThus <span class=\"math-container\">$\\beta = \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4)$</span>. So I can see that:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} = x^2 - \\eta_4x + \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4). \n\\end{align}</span>\nUsing the abc-formula (and checking whether the sign has to be plus or minus) I find that:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 &amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{\\eta_4^2-2(\\eta_4^2+\\eta_4-\\eta_2-4)}\\right)\\\\\n&amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{-\\eta_4^2-2(\\eta_4-\\eta_2-4)}\\right).\n\\end{align}</span>\nWorking everything out (which does not take long) you end up with:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 = \\frac{1}{8} (-1 + \\sqrt{17}) + \\frac{1}{8} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{4} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNow I conclude that:\n<span class=\"math-container\">\\begin{align}\n\\cos\\left({\\frac{2\\pi}{17}}\\right) = \\frac{1}{2}\\eta_8 = \\frac{1}{16} (-1 + \\sqrt{17}) + \\frac{1}{16} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{8} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNote that inside the big square root the term <span class=\"math-container\">$-\\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}$</span> appears, while Gauss actually gave the term <span class=\"math-container\">$-\\sqrt{34-2\\sqrt{17}}-2\\sqrt{34+2\\sqrt{17}}$</span> in its place. 
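Before comparing the two forms, it is reassuring to check the whole chain numerically; the sketch below (Python again; nothing in it is specific to the derivation beyond the formulas just obtained) confirms both the recursion and the closed form.</p>\n<pre><code>from math import cos, pi, sqrt\n\neta2 = (-1 + sqrt(17)) / 2\neta4 = (eta2 + sqrt(eta2**2 + 4)) / 2\nbeta = (eta4**2 + eta4 - eta2 - 4) / 2\neta8 = (eta4 + sqrt(eta4**2 - 4 * beta)) / 2\nprint(eta8 - 2 * cos(2 * pi / 17))  # ~1e-16: eta_8 really is 2cos(2*pi/17)\n\n# the closed form obtained above\nclosed = ((-1 + sqrt(17)) / 8 + sqrt(34 - 2 * sqrt(17)) / 8\n          + sqrt(17 + 3 * sqrt(17) - (3 + sqrt(17)) * sqrt(34 - 2 * sqrt(17)) / 2) / 4)\nprint(closed - eta8)  # ~1e-16\n\n# my inner term versus the one Gauss gave: both evaluate to about -18.0742\nmine = -(3 + sqrt(17)) * sqrt(34 - 2 * sqrt(17)) / 2\ngauss = -sqrt(34 - 2 * sqrt(17)) - 2 * sqrt(34 + 2 * sqrt(17))\nprint(mine - gauss)  # ~1e-14\n</code></pre>\n<p>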
You can show that squaring both expressions gives <span class=\"math-container\">$170+38\\sqrt{17}$</span>, so both are square roots of <span class=\"math-container\">$170+38\\sqrt{17}$</span>; since both expressions are negative, they represent the same number.</p>\n<p>You can try the same process with <span class=\"math-container\">$\\zeta_{257}$</span> (which will take 7 steps, whereas this took 3).</p>\n" }, { "answer_id": 2109558, "score": 3, "text": "<p>The more natural item is $$ 2 \\cos \\left( \\frac{2 \\pi}{17} \\right), $$ which is one root of \n$$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$\nfrom page 18 of <a href=\"https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false\" rel=\"nofollow noreferrer\">Reuschle (1875)</a>, using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\\omega \\neq 1$ but $\\omega^{17} = 1,$ it is not hard to show that $\\omega + \\frac{1}{\\omega}$ is a root of the given polynomial, using\n$$ \\small 1 + \\omega + \\omega^2 + \\omega^3 + \\omega^4 + \\omega^5 + \\omega^6 + \\omega^7 + \\omega^8 + \\omega^9 + \\omega^{10} + \\omega^{11} + \\omega^{12} + \\omega^{13} + \\omega^{14} + \\omega^{15} + \\omega^{16} =0 $$</p>\n\n<p><a href=\"https://i.stack.imgur.com/iGPye.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iGPye.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HBV6z.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HBV6z.jpg\" alt=\"enter image description here\"></a></p>\n
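<p>(If you want to see this numerically, here is a quick check, sketched in Python; the check is mine, not Reuschle's, and only confirms the claim to floating-point accuracy.)</p>\n<pre><code>from math import cos, pi\n\nx = 2 * cos(2 * pi / 17)\n# the degree-8 polynomial quoted above\np = x**8 + x**7 - 7*x**6 - 6*x**5 + 15*x**4 + 10*x**3 - 10*x**2 - 4*x + 1\nprint(p)  # ~1e-13, so 2cos(2*pi/17) is (numerically) a root\n</code></pre>\n" }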
In other words, it is possible to express <span class=\"math-container\">$\\zeta$</span> using square roots (square roots of square roots ..).</p>\n<p>Consider the Gauss periods:\n<span class=\"math-container\">\\begin{align}\n\\eta_2&amp;:=\\zeta +\\zeta^{-1} +\\zeta^2 +\\zeta^{-2} + \\zeta^4 + \\zeta^{-4} + \\zeta^8 +\\zeta^{-8} \\\\\n\\eta_4&amp;:= \\zeta +\\zeta^{-1} + \\zeta^4 + \\zeta^{-4}\\\\\n\\eta_8&amp;:= \\zeta^1 + \\zeta^{-1}.\n\\end{align}</span>\nNotice furthermore <span class=\"math-container\">$\\zeta^{-1}$</span> is the complex conjugated of <span class=\"math-container\">$\\zeta$</span>, thus <span class=\"math-container\">$\\eta_8 := \\zeta+\\zeta^{-1} = 2\\cos\\left(\\frac{2\\pi}{17}\\right)$</span>. Therefore, I will express <span class=\"math-container\">$\\eta_8$</span> using radicals.</p>\n<p>Let's look at the tower of fields: <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8) \\subset \\mathbf{Q}(\\zeta)$</span>. Note that <span class=\"math-container\">$\\mathbf{Q}(\\eta_8)$</span> corresponds to the unique subgroup <span class=\"math-container\">$H_8:=\\langle -1 \\rangle \\subset \\mathbf{F}_{17}^*$</span> of index <span class=\"math-container\">$8$</span>. Where <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> resp. <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> correspond to the unique subgroups <span class=\"math-container\">$H_4 := \\langle 4 \\rangle$</span> resp. <span class=\"math-container\">$H_2:=\\langle -8 \\rangle$</span> of index <span class=\"math-container\">$4$</span> resp. <span class=\"math-container\">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class=\"math-container\">$\\mathbf{F}_{17}^* \\supset H_2 \\supset H_4 \\supset H_8 \\supset \\{1\\}$</span>.</p>\n<p>What I will do now is express <span class=\"math-container\">$\\eta_4$</span> in terms of <span class=\"math-container\">$\\eta_2$</span> (and square roots of it), and <span class=\"math-container\">$\\eta_8$</span> in terms of <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class=\"math-container\">$\\eta_j$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_k) \\subset \\mathbf{Q}(\\eta_j)$</span> and then using the abc-formula for quadratics.</p>\n<p>The minimum polynomial of <span class=\"math-container\">$\\eta_2$</span> over <span class=\"math-container\">$\\mathbf{Q}$</span> denoted by <span class=\"math-container\">$f^{\\eta_2}_{\\mathbf{Q}}$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_2}_{\\mathbf{Q}} = (x-\\eta_2)\\cdot \\left(x-(\\zeta^3+\\zeta^{-3}+\\zeta^5+\\zeta^{-5}+\\zeta^6+\\zeta^{-6}+\\zeta^7+\\zeta^{-7} \\right)=x^2+x-4.\n\\end{align}</span>\nThis fact follows from the fact that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_2 = \\{ H_2, 3H_2\\}$</span> (<span class=\"math-container\">$\\eta_2$</span> has one different conjugate). I can see that <span class=\"math-container\">$\\eta_2 = \\frac{1}{2}(-1\\pm \\sqrt{17})$</span> using the abc-formula. 
Note that you can verify with a calculator (check that the sum<span class=\"math-container\">$\\sum_{k \\in H_2}\\cos(2k\\frac{\\pi}{17})$</span> is positive), by using the definition of <span class=\"math-container\">$\\eta_2$</span>, that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_2 = \\frac{1}{2}(-1+ \\sqrt{17}). \n\\end{align}</span></p>\n<p>I turn to <span class=\"math-container\">$\\eta_4$</span>, where I have that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_4 = \\{ H_4,3H_4,6H_4,8H_4\\}$</span>, which corresponds to the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_4)$</span>. However, I restrict to <span class=\"math-container\">$\\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4)$</span> which corresponds to <span class=\"math-container\">$H_2/H_4 = \\{ H_4,8H_4\\}$</span> (<span class=\"math-container\">$\\eta_4$</span> has one other conjugate over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span>). Therfore the minimum polynomial <span class=\"math-container\">$f^{\\eta_4}_{\\mathbf{Q}(\\eta_2)}$</span> of <span class=\"math-container\">$\\eta_4$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_4}_{\\mathbf{Q}(\\eta_2)} &amp;= (x-\\eta_4)\\cdot \\left(x- (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8} )\\right) \\\\\n&amp;= x^2 -(\\eta_4 +\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})x +\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})\\\\\n&amp;= x^2-\\eta_2x-1.\n\\end{align}</span>\nWhere you can verify that <span class=\"math-container\">$\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class=\"math-container\">$\\zeta$</span> is <span class=\"math-container\">$\\Phi_{17} = \\sum^{16}_{k=0}x^k$</span> ( in other words <span class=\"math-container\">$\\sum^{16}_{k=0}\\zeta^k =0$</span>). Now using the abc-formula you can see that:\n<span class=\"math-container\">$\\eta_4 = \\frac{1}{2}(\\eta_2 \\pm \\sqrt{\\eta_2^2+4})$</span>. Where you can check again that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_4 = \\frac{1}{2}(\\eta_2 + \\sqrt{\\eta_2^2+4}),\n\\end{align}</span> by simply noting that <span class=\"math-container\">$\\sum_{k \\in H_4} \\cos(2k\\frac{\\pi}{17})$</span> is positive.</p>\n<p>Last but not least, I consider <span class=\"math-container\">$\\eta_8$</span>. I'm interested in the extension <span class=\"math-container\">$\\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8)$</span> with corresponding group <span class=\"math-container\">$H_4/H_8 = \\{ H_8,4H_8\\}$</span>. 
So, the minimum polynomial of <span class=\"math-container\">$\\eta_8$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} &amp;= (x-\\eta_8) \\cdot \\left(x- (\\zeta^4+\\zeta^{-4})\\right) \\\\\n&amp;= x^2-(\\eta_8+\\zeta^4+\\zeta^{-4})x+\\eta_8(\\zeta^4+\\zeta^{-4})\\\\\n&amp;= x^2-\\eta_4x+\\beta.\n\\end{align}</span>\nNow <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = (\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4})$</span> can be expressed using <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> by noticing that:\n<span class=\"math-container\">\\begin{align}\n\\eta_4^2 &amp;:= (\\zeta+\\zeta^{-1} + \\zeta^4+\\zeta^{-4})^2 = (\\zeta+\\zeta^{-1})^2 + 2(\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4}) + (\\zeta^4+\\zeta^{-4})^2\\\\\n&amp;= \\zeta^2+\\zeta^{-2} + \\zeta^8+\\zeta^{-8} + 4 + 2 \\beta \\\\\n&amp;= \\eta_2 -\\eta_4 + 4 + 2\\beta.\n\\end{align}</span>\nThus <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4)$</span>. So I can see that:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} = x^2 - \\eta_4x + \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4). \n\\end{align}</span>\nUsing the abc-formula (and checking whether it has to plus or minus) I find that:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 &amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{\\eta_4^2-2(\\eta_4^2+\\eta_4-\\eta_2-4)}\\right)\\\\\n&amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{-\\eta_4^2-2(\\eta_4-\\eta_2-4)}\\right).\n\\end{align}</span>\nWorking everything out (not very long) you end up with:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 = \\frac{1}{8} (-1 + \\sqrt{17}) + \\frac{1}{8} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{4} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNow I conclude that:\n<span class=\"math-container\">\\begin{align}\n\\cos\\left({\\frac{2\\pi}{17}}\\right) = \\frac{1}{2}\\eta_8 = \\frac{1}{16} (-1 + \\sqrt{17}) + \\frac{1}{16} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{8} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNote that in the big square root the term <span class=\"math-container\">$ \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class=\"math-container\">$-\\sqrt{34-2\\sqrt{17}}-2\\sqrt{34+2\\sqrt{17}}$</span>. 
You can show by squaring both expressions that you get <span class=\"math-container\">$170+38\\sqrt{17}$</span>, thus both are roots of <span class=\"math-container\">$170+38\\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p>\n<p>You can try the same process with <span class=\"math-container\">$\\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p>\n" }, { "answer_id": 2109558, "score": 3, "text": "<p>The more natural item is $$ 2 \\cos \\left( \\frac{2 \\pi}{17} \\right), $$ which is one root of \n$$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$\nfrom page 18 of <a href=\"https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false\" rel=\"nofollow noreferrer\">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\\omega \\neq 1$ but $\\omega^{17} = 1,$ it is not hard to show that $\\omega + \\frac{1}{\\omega}$ is a root of the given polynomial, using\n$$ \\small 1 + \\omega + \\omega^2 + \\omega^3 + \\omega^4 + \\omega^5 + \\omega^6 + \\omega^7 + \\omega^8 + \\omega^9 + \\omega^{10} + \\omega^{11} + \\omega^{12} + \\omega^{13} + \\omega^{14} + \\omega^{15} + \\omega^{16} =0 $$</p>\n\n<p><a href=\"https://i.stack.imgur.com/iGPye.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iGPye.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HBV6z.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HBV6z.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 4726027, "score": 3, "text": "<p>I will show this by using Galois-theory. I will be using some properties of the fields <span class=\"math-container\">$\\mathbf{Q}(\\zeta_p)$</span> with <span class=\"math-container\">$p$</span> prime, which you can find <a href=\"https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf\" rel=\"nofollow noreferrer\">here</a> on page 51.</p>\n<p>I start by looking at the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\zeta_{17})$</span>, where I define <span class=\"math-container\">$\\zeta:=\\zeta_{17}:= \\text{exp}( \\frac{2 \\pi i}{17}) = \\cos\\left(\\frac{2\\pi}{17}\\right)+i\\sin\\left(\\frac{2\\pi}{17}\\right)$</span>. The galois-group <span class=\"math-container\">$\\text{Gal}(\\mathbf{Q}(\\zeta)/\\mathbf{Q})$</span> is isomorphic to <span class=\"math-container\">$\\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class=\"math-container\">$16$</span>, which is a power of two, therefore <span class=\"math-container\">$\\zeta$</span> is constructible using ruler and compass. 
In other words, it is possible to express <span class=\"math-container\">$\\zeta$</span> using square roots (square roots of square roots ..).</p>\n<p>Consider the Gauss periods:\n<span class=\"math-container\">\\begin{align}\n\\eta_2&amp;:=\\zeta +\\zeta^{-1} +\\zeta^2 +\\zeta^{-2} + \\zeta^4 + \\zeta^{-4} + \\zeta^8 +\\zeta^{-8} \\\\\n\\eta_4&amp;:= \\zeta +\\zeta^{-1} + \\zeta^4 + \\zeta^{-4}\\\\\n\\eta_8&amp;:= \\zeta^1 + \\zeta^{-1}.\n\\end{align}</span>\nNotice furthermore <span class=\"math-container\">$\\zeta^{-1}$</span> is the complex conjugated of <span class=\"math-container\">$\\zeta$</span>, thus <span class=\"math-container\">$\\eta_8 := \\zeta+\\zeta^{-1} = 2\\cos\\left(\\frac{2\\pi}{17}\\right)$</span>. Therefore, I will express <span class=\"math-container\">$\\eta_8$</span> using radicals.</p>\n<p>Let's look at the tower of fields: <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8) \\subset \\mathbf{Q}(\\zeta)$</span>. Note that <span class=\"math-container\">$\\mathbf{Q}(\\eta_8)$</span> corresponds to the unique subgroup <span class=\"math-container\">$H_8:=\\langle -1 \\rangle \\subset \\mathbf{F}_{17}^*$</span> of index <span class=\"math-container\">$8$</span>. Where <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> resp. <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> correspond to the unique subgroups <span class=\"math-container\">$H_4 := \\langle 4 \\rangle$</span> resp. <span class=\"math-container\">$H_2:=\\langle -8 \\rangle$</span> of index <span class=\"math-container\">$4$</span> resp. <span class=\"math-container\">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class=\"math-container\">$\\mathbf{F}_{17}^* \\supset H_2 \\supset H_4 \\supset H_8 \\supset \\{1\\}$</span>.</p>\n<p>What I will do now is express <span class=\"math-container\">$\\eta_4$</span> in terms of <span class=\"math-container\">$\\eta_2$</span> (and square roots of it), and <span class=\"math-container\">$\\eta_8$</span> in terms of <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class=\"math-container\">$\\eta_j$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_k) \\subset \\mathbf{Q}(\\eta_j)$</span> and then using the abc-formula for quadratics.</p>\n<p>The minimum polynomial of <span class=\"math-container\">$\\eta_2$</span> over <span class=\"math-container\">$\\mathbf{Q}$</span> denoted by <span class=\"math-container\">$f^{\\eta_2}_{\\mathbf{Q}}$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_2}_{\\mathbf{Q}} = (x-\\eta_2)\\cdot \\left(x-(\\zeta^3+\\zeta^{-3}+\\zeta^5+\\zeta^{-5}+\\zeta^6+\\zeta^{-6}+\\zeta^7+\\zeta^{-7} \\right)=x^2+x-4.\n\\end{align}</span>\nThis fact follows from the fact that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_2 = \\{ H_2, 3H_2\\}$</span> (<span class=\"math-container\">$\\eta_2$</span> has one different conjugate). I can see that <span class=\"math-container\">$\\eta_2 = \\frac{1}{2}(-1\\pm \\sqrt{17})$</span> using the abc-formula. 
Note that you can verify with a calculator (check that the sum<span class=\"math-container\">$\\sum_{k \\in H_2}\\cos(2k\\frac{\\pi}{17})$</span> is positive), by using the definition of <span class=\"math-container\">$\\eta_2$</span>, that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_2 = \\frac{1}{2}(-1+ \\sqrt{17}). \n\\end{align}</span></p>\n<p>I turn to <span class=\"math-container\">$\\eta_4$</span>, where I have that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_4 = \\{ H_4,3H_4,6H_4,8H_4\\}$</span>, which corresponds to the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_4)$</span>. However, I restrict to <span class=\"math-container\">$\\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4)$</span> which corresponds to <span class=\"math-container\">$H_2/H_4 = \\{ H_4,8H_4\\}$</span> (<span class=\"math-container\">$\\eta_4$</span> has one other conjugate over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span>). Therfore the minimum polynomial <span class=\"math-container\">$f^{\\eta_4}_{\\mathbf{Q}(\\eta_2)}$</span> of <span class=\"math-container\">$\\eta_4$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_4}_{\\mathbf{Q}(\\eta_2)} &amp;= (x-\\eta_4)\\cdot \\left(x- (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8} )\\right) \\\\\n&amp;= x^2 -(\\eta_4 +\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})x +\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})\\\\\n&amp;= x^2-\\eta_2x-1.\n\\end{align}</span>\nWhere you can verify that <span class=\"math-container\">$\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class=\"math-container\">$\\zeta$</span> is <span class=\"math-container\">$\\Phi_{17} = \\sum^{16}_{k=0}x^k$</span> ( in other words <span class=\"math-container\">$\\sum^{16}_{k=0}\\zeta^k =0$</span>). Now using the abc-formula you can see that:\n<span class=\"math-container\">$\\eta_4 = \\frac{1}{2}(\\eta_2 \\pm \\sqrt{\\eta_2^2+4})$</span>. Where you can check again that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_4 = \\frac{1}{2}(\\eta_2 + \\sqrt{\\eta_2^2+4}),\n\\end{align}</span> by simply noting that <span class=\"math-container\">$\\sum_{k \\in H_4} \\cos(2k\\frac{\\pi}{17})$</span> is positive.</p>\n<p>Last but not least, I consider <span class=\"math-container\">$\\eta_8$</span>. I'm interested in the extension <span class=\"math-container\">$\\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8)$</span> with corresponding group <span class=\"math-container\">$H_4/H_8 = \\{ H_8,4H_8\\}$</span>. 
So, the minimum polynomial of <span class=\"math-container\">$\\eta_8$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} &amp;= (x-\\eta_8) \\cdot \\left(x- (\\zeta^4+\\zeta^{-4})\\right) \\\\\n&amp;= x^2-(\\eta_8+\\zeta^4+\\zeta^{-4})x+\\eta_8(\\zeta^4+\\zeta^{-4})\\\\\n&amp;= x^2-\\eta_4x+\\beta.\n\\end{align}</span>\nNow <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = (\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4})$</span> can be expressed using <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> by noticing that:\n<span class=\"math-container\">\\begin{align}\n\\eta_4^2 &amp;:= (\\zeta+\\zeta^{-1} + \\zeta^4+\\zeta^{-4})^2 = (\\zeta+\\zeta^{-1})^2 + 2(\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4}) + (\\zeta^4+\\zeta^{-4})^2\\\\\n&amp;= \\zeta^2+\\zeta^{-2} + \\zeta^8+\\zeta^{-8} + 4 + 2 \\beta \\\\\n&amp;= \\eta_2 -\\eta_4 + 4 + 2\\beta.\n\\end{align}</span>\nThus <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4)$</span>. So I can see that:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} = x^2 - \\eta_4x + \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4). \n\\end{align}</span>\nUsing the abc-formula (and checking whether it has to plus or minus) I find that:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 &amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{\\eta_4^2-2(\\eta_4^2+\\eta_4-\\eta_2-4)}\\right)\\\\\n&amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{-\\eta_4^2-2(\\eta_4-\\eta_2-4)}\\right).\n\\end{align}</span>\nWorking everything out (not very long) you end up with:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 = \\frac{1}{8} (-1 + \\sqrt{17}) + \\frac{1}{8} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{4} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNow I conclude that:\n<span class=\"math-container\">\\begin{align}\n\\cos\\left({\\frac{2\\pi}{17}}\\right) = \\frac{1}{2}\\eta_8 = \\frac{1}{16} (-1 + \\sqrt{17}) + \\frac{1}{16} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{8} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNote that in the big square root the term <span class=\"math-container\">$ \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class=\"math-container\">$-\\sqrt{34-2\\sqrt{17}}-2\\sqrt{34+2\\sqrt{17}}$</span>. 
You can show by squaring both expressions that you get <span class=\"math-container\">$170+38\\sqrt{17}$</span>, thus both are roots of <span class=\"math-container\">$170+38\\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p>\n<p>You can try the same process with <span class=\"math-container\">$\\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p>\n" }, { "answer_id": 2109558, "score": 3, "text": "<p>The more natural item is $$ 2 \\cos \\left( \\frac{2 \\pi}{17} \\right), $$ which is one root of \n$$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$\nfrom page 18 of <a href=\"https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false\" rel=\"nofollow noreferrer\">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\\omega \\neq 1$ but $\\omega^{17} = 1,$ it is not hard to show that $\\omega + \\frac{1}{\\omega}$ is a root of the given polynomial, using\n$$ \\small 1 + \\omega + \\omega^2 + \\omega^3 + \\omega^4 + \\omega^5 + \\omega^6 + \\omega^7 + \\omega^8 + \\omega^9 + \\omega^{10} + \\omega^{11} + \\omega^{12} + \\omega^{13} + \\omega^{14} + \\omega^{15} + \\omega^{16} =0 $$</p>\n\n<p><a href=\"https://i.stack.imgur.com/iGPye.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iGPye.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HBV6z.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HBV6z.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 4726027, "score": 3, "text": "<p>I will show this by using Galois-theory. I will be using some properties of the fields <span class=\"math-container\">$\\mathbf{Q}(\\zeta_p)$</span> with <span class=\"math-container\">$p$</span> prime, which you can find <a href=\"https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf\" rel=\"nofollow noreferrer\">here</a> on page 51.</p>\n<p>I start by looking at the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\zeta_{17})$</span>, where I define <span class=\"math-container\">$\\zeta:=\\zeta_{17}:= \\text{exp}( \\frac{2 \\pi i}{17}) = \\cos\\left(\\frac{2\\pi}{17}\\right)+i\\sin\\left(\\frac{2\\pi}{17}\\right)$</span>. The galois-group <span class=\"math-container\">$\\text{Gal}(\\mathbf{Q}(\\zeta)/\\mathbf{Q})$</span> is isomorphic to <span class=\"math-container\">$\\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class=\"math-container\">$16$</span>, which is a power of two, therefore <span class=\"math-container\">$\\zeta$</span> is constructible using ruler and compass. 
In other words, it is possible to express <span class=\"math-container\">$\\zeta$</span> using square roots (square roots of square roots ..).</p>\n<p>Consider the Gauss periods:\n<span class=\"math-container\">\\begin{align}\n\\eta_2&amp;:=\\zeta +\\zeta^{-1} +\\zeta^2 +\\zeta^{-2} + \\zeta^4 + \\zeta^{-4} + \\zeta^8 +\\zeta^{-8} \\\\\n\\eta_4&amp;:= \\zeta +\\zeta^{-1} + \\zeta^4 + \\zeta^{-4}\\\\\n\\eta_8&amp;:= \\zeta^1 + \\zeta^{-1}.\n\\end{align}</span>\nNotice furthermore <span class=\"math-container\">$\\zeta^{-1}$</span> is the complex conjugated of <span class=\"math-container\">$\\zeta$</span>, thus <span class=\"math-container\">$\\eta_8 := \\zeta+\\zeta^{-1} = 2\\cos\\left(\\frac{2\\pi}{17}\\right)$</span>. Therefore, I will express <span class=\"math-container\">$\\eta_8$</span> using radicals.</p>\n<p>Let's look at the tower of fields: <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8) \\subset \\mathbf{Q}(\\zeta)$</span>. Note that <span class=\"math-container\">$\\mathbf{Q}(\\eta_8)$</span> corresponds to the unique subgroup <span class=\"math-container\">$H_8:=\\langle -1 \\rangle \\subset \\mathbf{F}_{17}^*$</span> of index <span class=\"math-container\">$8$</span>. Where <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> resp. <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> correspond to the unique subgroups <span class=\"math-container\">$H_4 := \\langle 4 \\rangle$</span> resp. <span class=\"math-container\">$H_2:=\\langle -8 \\rangle$</span> of index <span class=\"math-container\">$4$</span> resp. <span class=\"math-container\">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class=\"math-container\">$\\mathbf{F}_{17}^* \\supset H_2 \\supset H_4 \\supset H_8 \\supset \\{1\\}$</span>.</p>\n<p>What I will do now is express <span class=\"math-container\">$\\eta_4$</span> in terms of <span class=\"math-container\">$\\eta_2$</span> (and square roots of it), and <span class=\"math-container\">$\\eta_8$</span> in terms of <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class=\"math-container\">$\\eta_j$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_k) \\subset \\mathbf{Q}(\\eta_j)$</span> and then using the abc-formula for quadratics.</p>\n<p>The minimum polynomial of <span class=\"math-container\">$\\eta_2$</span> over <span class=\"math-container\">$\\mathbf{Q}$</span> denoted by <span class=\"math-container\">$f^{\\eta_2}_{\\mathbf{Q}}$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_2}_{\\mathbf{Q}} = (x-\\eta_2)\\cdot \\left(x-(\\zeta^3+\\zeta^{-3}+\\zeta^5+\\zeta^{-5}+\\zeta^6+\\zeta^{-6}+\\zeta^7+\\zeta^{-7} \\right)=x^2+x-4.\n\\end{align}</span>\nThis fact follows from the fact that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_2 = \\{ H_2, 3H_2\\}$</span> (<span class=\"math-container\">$\\eta_2$</span> has one different conjugate). I can see that <span class=\"math-container\">$\\eta_2 = \\frac{1}{2}(-1\\pm \\sqrt{17})$</span> using the abc-formula. 
Note that you can verify with a calculator (check that the sum<span class=\"math-container\">$\\sum_{k \\in H_2}\\cos(2k\\frac{\\pi}{17})$</span> is positive), by using the definition of <span class=\"math-container\">$\\eta_2$</span>, that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_2 = \\frac{1}{2}(-1+ \\sqrt{17}). \n\\end{align}</span></p>\n<p>I turn to <span class=\"math-container\">$\\eta_4$</span>, where I have that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_4 = \\{ H_4,3H_4,6H_4,8H_4\\}$</span>, which corresponds to the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_4)$</span>. However, I restrict to <span class=\"math-container\">$\\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4)$</span> which corresponds to <span class=\"math-container\">$H_2/H_4 = \\{ H_4,8H_4\\}$</span> (<span class=\"math-container\">$\\eta_4$</span> has one other conjugate over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span>). Therfore the minimum polynomial <span class=\"math-container\">$f^{\\eta_4}_{\\mathbf{Q}(\\eta_2)}$</span> of <span class=\"math-container\">$\\eta_4$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_4}_{\\mathbf{Q}(\\eta_2)} &amp;= (x-\\eta_4)\\cdot \\left(x- (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8} )\\right) \\\\\n&amp;= x^2 -(\\eta_4 +\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})x +\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})\\\\\n&amp;= x^2-\\eta_2x-1.\n\\end{align}</span>\nWhere you can verify that <span class=\"math-container\">$\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class=\"math-container\">$\\zeta$</span> is <span class=\"math-container\">$\\Phi_{17} = \\sum^{16}_{k=0}x^k$</span> ( in other words <span class=\"math-container\">$\\sum^{16}_{k=0}\\zeta^k =0$</span>). Now using the abc-formula you can see that:\n<span class=\"math-container\">$\\eta_4 = \\frac{1}{2}(\\eta_2 \\pm \\sqrt{\\eta_2^2+4})$</span>. Where you can check again that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_4 = \\frac{1}{2}(\\eta_2 + \\sqrt{\\eta_2^2+4}),\n\\end{align}</span> by simply noting that <span class=\"math-container\">$\\sum_{k \\in H_4} \\cos(2k\\frac{\\pi}{17})$</span> is positive.</p>\n<p>Last but not least, I consider <span class=\"math-container\">$\\eta_8$</span>. I'm interested in the extension <span class=\"math-container\">$\\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8)$</span> with corresponding group <span class=\"math-container\">$H_4/H_8 = \\{ H_8,4H_8\\}$</span>. 
So, the minimum polynomial of <span class=\"math-container\">$\\eta_8$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} &amp;= (x-\\eta_8) \\cdot \\left(x- (\\zeta^4+\\zeta^{-4})\\right) \\\\\n&amp;= x^2-(\\eta_8+\\zeta^4+\\zeta^{-4})x+\\eta_8(\\zeta^4+\\zeta^{-4})\\\\\n&amp;= x^2-\\eta_4x+\\beta.\n\\end{align}</span>\nNow <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = (\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4})$</span> can be expressed using <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> by noticing that:\n<span class=\"math-container\">\\begin{align}\n\\eta_4^2 &amp;:= (\\zeta+\\zeta^{-1} + \\zeta^4+\\zeta^{-4})^2 = (\\zeta+\\zeta^{-1})^2 + 2(\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4}) + (\\zeta^4+\\zeta^{-4})^2\\\\\n&amp;= \\zeta^2+\\zeta^{-2} + \\zeta^8+\\zeta^{-8} + 4 + 2 \\beta \\\\\n&amp;= \\eta_2 -\\eta_4 + 4 + 2\\beta.\n\\end{align}</span>\nThus <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4)$</span>. So I can see that:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} = x^2 - \\eta_4x + \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4). \n\\end{align}</span>\nUsing the abc-formula (and checking whether it has to plus or minus) I find that:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 &amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{\\eta_4^2-2(\\eta_4^2+\\eta_4-\\eta_2-4)}\\right)\\\\\n&amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{-\\eta_4^2-2(\\eta_4-\\eta_2-4)}\\right).\n\\end{align}</span>\nWorking everything out (not very long) you end up with:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 = \\frac{1}{8} (-1 + \\sqrt{17}) + \\frac{1}{8} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{4} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNow I conclude that:\n<span class=\"math-container\">\\begin{align}\n\\cos\\left({\\frac{2\\pi}{17}}\\right) = \\frac{1}{2}\\eta_8 = \\frac{1}{16} (-1 + \\sqrt{17}) + \\frac{1}{16} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{8} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNote that in the big square root the term <span class=\"math-container\">$ \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class=\"math-container\">$-\\sqrt{34-2\\sqrt{17}}-2\\sqrt{34+2\\sqrt{17}}$</span>. 
You can show by squaring both expressions that you get <span class=\"math-container\">$170+38\\sqrt{17}$</span>, thus both are roots of <span class=\"math-container\">$170+38\\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p>\n<p>You can try the same process with <span class=\"math-container\">$\\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p>\n" }, { "answer_id": 2109558, "score": 3, "text": "<p>The more natural item is $$ 2 \\cos \\left( \\frac{2 \\pi}{17} \\right), $$ which is one root of \n$$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$\nfrom page 18 of <a href=\"https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false\" rel=\"nofollow noreferrer\">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\\omega \\neq 1$ but $\\omega^{17} = 1,$ it is not hard to show that $\\omega + \\frac{1}{\\omega}$ is a root of the given polynomial, using\n$$ \\small 1 + \\omega + \\omega^2 + \\omega^3 + \\omega^4 + \\omega^5 + \\omega^6 + \\omega^7 + \\omega^8 + \\omega^9 + \\omega^{10} + \\omega^{11} + \\omega^{12} + \\omega^{13} + \\omega^{14} + \\omega^{15} + \\omega^{16} =0 $$</p>\n\n<p><a href=\"https://i.stack.imgur.com/iGPye.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iGPye.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HBV6z.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HBV6z.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 4726027, "score": 3, "text": "<p>I will show this by using Galois-theory. I will be using some properties of the fields <span class=\"math-container\">$\\mathbf{Q}(\\zeta_p)$</span> with <span class=\"math-container\">$p$</span> prime, which you can find <a href=\"https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf\" rel=\"nofollow noreferrer\">here</a> on page 51.</p>\n<p>I start by looking at the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\zeta_{17})$</span>, where I define <span class=\"math-container\">$\\zeta:=\\zeta_{17}:= \\text{exp}( \\frac{2 \\pi i}{17}) = \\cos\\left(\\frac{2\\pi}{17}\\right)+i\\sin\\left(\\frac{2\\pi}{17}\\right)$</span>. The galois-group <span class=\"math-container\">$\\text{Gal}(\\mathbf{Q}(\\zeta)/\\mathbf{Q})$</span> is isomorphic to <span class=\"math-container\">$\\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class=\"math-container\">$16$</span>, which is a power of two, therefore <span class=\"math-container\">$\\zeta$</span> is constructible using ruler and compass. 
In other words, it is possible to express <span class=\"math-container\">$\\zeta$</span> using square roots (square roots of square roots ..).</p>\n<p>Consider the Gauss periods:\n<span class=\"math-container\">\\begin{align}\n\\eta_2&amp;:=\\zeta +\\zeta^{-1} +\\zeta^2 +\\zeta^{-2} + \\zeta^4 + \\zeta^{-4} + \\zeta^8 +\\zeta^{-8} \\\\\n\\eta_4&amp;:= \\zeta +\\zeta^{-1} + \\zeta^4 + \\zeta^{-4}\\\\\n\\eta_8&amp;:= \\zeta^1 + \\zeta^{-1}.\n\\end{align}</span>\nNotice furthermore <span class=\"math-container\">$\\zeta^{-1}$</span> is the complex conjugated of <span class=\"math-container\">$\\zeta$</span>, thus <span class=\"math-container\">$\\eta_8 := \\zeta+\\zeta^{-1} = 2\\cos\\left(\\frac{2\\pi}{17}\\right)$</span>. Therefore, I will express <span class=\"math-container\">$\\eta_8$</span> using radicals.</p>\n<p>Let's look at the tower of fields: <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8) \\subset \\mathbf{Q}(\\zeta)$</span>. Note that <span class=\"math-container\">$\\mathbf{Q}(\\eta_8)$</span> corresponds to the unique subgroup <span class=\"math-container\">$H_8:=\\langle -1 \\rangle \\subset \\mathbf{F}_{17}^*$</span> of index <span class=\"math-container\">$8$</span>. Where <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> resp. <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> correspond to the unique subgroups <span class=\"math-container\">$H_4 := \\langle 4 \\rangle$</span> resp. <span class=\"math-container\">$H_2:=\\langle -8 \\rangle$</span> of index <span class=\"math-container\">$4$</span> resp. <span class=\"math-container\">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class=\"math-container\">$\\mathbf{F}_{17}^* \\supset H_2 \\supset H_4 \\supset H_8 \\supset \\{1\\}$</span>.</p>\n<p>What I will do now is express <span class=\"math-container\">$\\eta_4$</span> in terms of <span class=\"math-container\">$\\eta_2$</span> (and square roots of it), and <span class=\"math-container\">$\\eta_8$</span> in terms of <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class=\"math-container\">$\\eta_j$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_k) \\subset \\mathbf{Q}(\\eta_j)$</span> and then using the abc-formula for quadratics.</p>\n<p>The minimum polynomial of <span class=\"math-container\">$\\eta_2$</span> over <span class=\"math-container\">$\\mathbf{Q}$</span> denoted by <span class=\"math-container\">$f^{\\eta_2}_{\\mathbf{Q}}$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_2}_{\\mathbf{Q}} = (x-\\eta_2)\\cdot \\left(x-(\\zeta^3+\\zeta^{-3}+\\zeta^5+\\zeta^{-5}+\\zeta^6+\\zeta^{-6}+\\zeta^7+\\zeta^{-7} \\right)=x^2+x-4.\n\\end{align}</span>\nThis fact follows from the fact that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_2 = \\{ H_2, 3H_2\\}$</span> (<span class=\"math-container\">$\\eta_2$</span> has one different conjugate). I can see that <span class=\"math-container\">$\\eta_2 = \\frac{1}{2}(-1\\pm \\sqrt{17})$</span> using the abc-formula. 
Note that you can verify with a calculator (check that the sum<span class=\"math-container\">$\\sum_{k \\in H_2}\\cos(2k\\frac{\\pi}{17})$</span> is positive), by using the definition of <span class=\"math-container\">$\\eta_2$</span>, that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_2 = \\frac{1}{2}(-1+ \\sqrt{17}). \n\\end{align}</span></p>\n<p>I turn to <span class=\"math-container\">$\\eta_4$</span>, where I have that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_4 = \\{ H_4,3H_4,6H_4,8H_4\\}$</span>, which corresponds to the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_4)$</span>. However, I restrict to <span class=\"math-container\">$\\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4)$</span> which corresponds to <span class=\"math-container\">$H_2/H_4 = \\{ H_4,8H_4\\}$</span> (<span class=\"math-container\">$\\eta_4$</span> has one other conjugate over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span>). Therfore the minimum polynomial <span class=\"math-container\">$f^{\\eta_4}_{\\mathbf{Q}(\\eta_2)}$</span> of <span class=\"math-container\">$\\eta_4$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_4}_{\\mathbf{Q}(\\eta_2)} &amp;= (x-\\eta_4)\\cdot \\left(x- (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8} )\\right) \\\\\n&amp;= x^2 -(\\eta_4 +\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})x +\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})\\\\\n&amp;= x^2-\\eta_2x-1.\n\\end{align}</span>\nWhere you can verify that <span class=\"math-container\">$\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class=\"math-container\">$\\zeta$</span> is <span class=\"math-container\">$\\Phi_{17} = \\sum^{16}_{k=0}x^k$</span> ( in other words <span class=\"math-container\">$\\sum^{16}_{k=0}\\zeta^k =0$</span>). Now using the abc-formula you can see that:\n<span class=\"math-container\">$\\eta_4 = \\frac{1}{2}(\\eta_2 \\pm \\sqrt{\\eta_2^2+4})$</span>. Where you can check again that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_4 = \\frac{1}{2}(\\eta_2 + \\sqrt{\\eta_2^2+4}),\n\\end{align}</span> by simply noting that <span class=\"math-container\">$\\sum_{k \\in H_4} \\cos(2k\\frac{\\pi}{17})$</span> is positive.</p>\n<p>Last but not least, I consider <span class=\"math-container\">$\\eta_8$</span>. I'm interested in the extension <span class=\"math-container\">$\\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8)$</span> with corresponding group <span class=\"math-container\">$H_4/H_8 = \\{ H_8,4H_8\\}$</span>. 
So, the minimum polynomial of <span class=\"math-container\">$\\eta_8$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} &amp;= (x-\\eta_8) \\cdot \\left(x- (\\zeta^4+\\zeta^{-4})\\right) \\\\\n&amp;= x^2-(\\eta_8+\\zeta^4+\\zeta^{-4})x+\\eta_8(\\zeta^4+\\zeta^{-4})\\\\\n&amp;= x^2-\\eta_4x+\\beta.\n\\end{align}</span>\nNow <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = (\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4})$</span> can be expressed using <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> by noticing that:\n<span class=\"math-container\">\\begin{align}\n\\eta_4^2 &amp;:= (\\zeta+\\zeta^{-1} + \\zeta^4+\\zeta^{-4})^2 = (\\zeta+\\zeta^{-1})^2 + 2(\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4}) + (\\zeta^4+\\zeta^{-4})^2\\\\\n&amp;= \\zeta^2+\\zeta^{-2} + \\zeta^8+\\zeta^{-8} + 4 + 2 \\beta \\\\\n&amp;= \\eta_2 -\\eta_4 + 4 + 2\\beta.\n\\end{align}</span>\nThus <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4)$</span>. So I can see that:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} = x^2 - \\eta_4x + \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4). \n\\end{align}</span>\nUsing the abc-formula (and checking whether it has to plus or minus) I find that:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 &amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{\\eta_4^2-2(\\eta_4^2+\\eta_4-\\eta_2-4)}\\right)\\\\\n&amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{-\\eta_4^2-2(\\eta_4-\\eta_2-4)}\\right).\n\\end{align}</span>\nWorking everything out (not very long) you end up with:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 = \\frac{1}{8} (-1 + \\sqrt{17}) + \\frac{1}{8} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{4} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNow I conclude that:\n<span class=\"math-container\">\\begin{align}\n\\cos\\left({\\frac{2\\pi}{17}}\\right) = \\frac{1}{2}\\eta_8 = \\frac{1}{16} (-1 + \\sqrt{17}) + \\frac{1}{16} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{8} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNote that in the big square root the term <span class=\"math-container\">$ \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class=\"math-container\">$-\\sqrt{34-2\\sqrt{17}}-2\\sqrt{34+2\\sqrt{17}}$</span>. 
You can show by squaring both expressions that you get <span class=\"math-container\">$170+38\\sqrt{17}$</span>, thus both are roots of <span class=\"math-container\">$170+38\\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p>\n<p>You can try the same process with <span class=\"math-container\">$\\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p>\n" }, { "answer_id": 2109558, "score": 3, "text": "<p>The more natural item is $$ 2 \\cos \\left( \\frac{2 \\pi}{17} \\right), $$ which is one root of \n$$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$\nfrom page 18 of <a href=\"https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false\" rel=\"nofollow noreferrer\">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\\omega \\neq 1$ but $\\omega^{17} = 1,$ it is not hard to show that $\\omega + \\frac{1}{\\omega}$ is a root of the given polynomial, using\n$$ \\small 1 + \\omega + \\omega^2 + \\omega^3 + \\omega^4 + \\omega^5 + \\omega^6 + \\omega^7 + \\omega^8 + \\omega^9 + \\omega^{10} + \\omega^{11} + \\omega^{12} + \\omega^{13} + \\omega^{14} + \\omega^{15} + \\omega^{16} =0 $$</p>\n\n<p><a href=\"https://i.stack.imgur.com/iGPye.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iGPye.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HBV6z.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HBV6z.jpg\" alt=\"enter image description here\"></a></p>\n" }, { "answer_id": 4726027, "score": 3, "text": "<p>I will show this by using Galois-theory. I will be using some properties of the fields <span class=\"math-container\">$\\mathbf{Q}(\\zeta_p)$</span> with <span class=\"math-container\">$p$</span> prime, which you can find <a href=\"https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf\" rel=\"nofollow noreferrer\">here</a> on page 51.</p>\n<p>I start by looking at the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\zeta_{17})$</span>, where I define <span class=\"math-container\">$\\zeta:=\\zeta_{17}:= \\text{exp}( \\frac{2 \\pi i}{17}) = \\cos\\left(\\frac{2\\pi}{17}\\right)+i\\sin\\left(\\frac{2\\pi}{17}\\right)$</span>. The galois-group <span class=\"math-container\">$\\text{Gal}(\\mathbf{Q}(\\zeta)/\\mathbf{Q})$</span> is isomorphic to <span class=\"math-container\">$\\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class=\"math-container\">$16$</span>, which is a power of two, therefore <span class=\"math-container\">$\\zeta$</span> is constructible using ruler and compass. 
In other words, it is possible to express <span class=\"math-container\">$\\zeta$</span> using square roots (square roots of square roots ..).</p>\n<p>Consider the Gauss periods:\n<span class=\"math-container\">\\begin{align}\n\\eta_2&amp;:=\\zeta +\\zeta^{-1} +\\zeta^2 +\\zeta^{-2} + \\zeta^4 + \\zeta^{-4} + \\zeta^8 +\\zeta^{-8} \\\\\n\\eta_4&amp;:= \\zeta +\\zeta^{-1} + \\zeta^4 + \\zeta^{-4}\\\\\n\\eta_8&amp;:= \\zeta^1 + \\zeta^{-1}.\n\\end{align}</span>\nNotice furthermore <span class=\"math-container\">$\\zeta^{-1}$</span> is the complex conjugated of <span class=\"math-container\">$\\zeta$</span>, thus <span class=\"math-container\">$\\eta_8 := \\zeta+\\zeta^{-1} = 2\\cos\\left(\\frac{2\\pi}{17}\\right)$</span>. Therefore, I will express <span class=\"math-container\">$\\eta_8$</span> using radicals.</p>\n<p>Let's look at the tower of fields: <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8) \\subset \\mathbf{Q}(\\zeta)$</span>. Note that <span class=\"math-container\">$\\mathbf{Q}(\\eta_8)$</span> corresponds to the unique subgroup <span class=\"math-container\">$H_8:=\\langle -1 \\rangle \\subset \\mathbf{F}_{17}^*$</span> of index <span class=\"math-container\">$8$</span>. Where <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> resp. <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> correspond to the unique subgroups <span class=\"math-container\">$H_4 := \\langle 4 \\rangle$</span> resp. <span class=\"math-container\">$H_2:=\\langle -8 \\rangle$</span> of index <span class=\"math-container\">$4$</span> resp. <span class=\"math-container\">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class=\"math-container\">$\\mathbf{F}_{17}^* \\supset H_2 \\supset H_4 \\supset H_8 \\supset \\{1\\}$</span>.</p>\n<p>What I will do now is express <span class=\"math-container\">$\\eta_4$</span> in terms of <span class=\"math-container\">$\\eta_2$</span> (and square roots of it), and <span class=\"math-container\">$\\eta_8$</span> in terms of <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class=\"math-container\">$\\eta_j$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_k) \\subset \\mathbf{Q}(\\eta_j)$</span> and then using the abc-formula for quadratics.</p>\n<p>The minimum polynomial of <span class=\"math-container\">$\\eta_2$</span> over <span class=\"math-container\">$\\mathbf{Q}$</span> denoted by <span class=\"math-container\">$f^{\\eta_2}_{\\mathbf{Q}}$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_2}_{\\mathbf{Q}} = (x-\\eta_2)\\cdot \\left(x-(\\zeta^3+\\zeta^{-3}+\\zeta^5+\\zeta^{-5}+\\zeta^6+\\zeta^{-6}+\\zeta^7+\\zeta^{-7} \\right)=x^2+x-4.\n\\end{align}</span>\nThis fact follows from the fact that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_2 = \\{ H_2, 3H_2\\}$</span> (<span class=\"math-container\">$\\eta_2$</span> has one different conjugate). I can see that <span class=\"math-container\">$\\eta_2 = \\frac{1}{2}(-1\\pm \\sqrt{17})$</span> using the abc-formula. 
Note that you can verify with a calculator (check that the sum<span class=\"math-container\">$\\sum_{k \\in H_2}\\cos(2k\\frac{\\pi}{17})$</span> is positive), by using the definition of <span class=\"math-container\">$\\eta_2$</span>, that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_2 = \\frac{1}{2}(-1+ \\sqrt{17}). \n\\end{align}</span></p>\n<p>I turn to <span class=\"math-container\">$\\eta_4$</span>, where I have that <span class=\"math-container\">$\\mathbf{F}_{17}^* / H_4 = \\{ H_4,3H_4,6H_4,8H_4\\}$</span>, which corresponds to the extension <span class=\"math-container\">$\\mathbf{Q} \\subset \\mathbf{Q}(\\eta_4)$</span>. However, I restrict to <span class=\"math-container\">$\\mathbf{Q}(\\eta_2) \\subset \\mathbf{Q}(\\eta_4)$</span> which corresponds to <span class=\"math-container\">$H_2/H_4 = \\{ H_4,8H_4\\}$</span> (<span class=\"math-container\">$\\eta_4$</span> has one other conjugate over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span>). Therfore the minimum polynomial <span class=\"math-container\">$f^{\\eta_4}_{\\mathbf{Q}(\\eta_2)}$</span> of <span class=\"math-container\">$\\eta_4$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_2)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_4}_{\\mathbf{Q}(\\eta_2)} &amp;= (x-\\eta_4)\\cdot \\left(x- (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8} )\\right) \\\\\n&amp;= x^2 -(\\eta_4 +\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})x +\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8})\\\\\n&amp;= x^2-\\eta_2x-1.\n\\end{align}</span>\nWhere you can verify that <span class=\"math-container\">$\\eta_4 \\cdot (\\zeta^2+\\zeta^{-2}+\\zeta^8+\\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class=\"math-container\">$\\zeta$</span> is <span class=\"math-container\">$\\Phi_{17} = \\sum^{16}_{k=0}x^k$</span> ( in other words <span class=\"math-container\">$\\sum^{16}_{k=0}\\zeta^k =0$</span>). Now using the abc-formula you can see that:\n<span class=\"math-container\">$\\eta_4 = \\frac{1}{2}(\\eta_2 \\pm \\sqrt{\\eta_2^2+4})$</span>. Where you can check again that in fact:\n<span class=\"math-container\">\\begin{align}\n\\eta_4 = \\frac{1}{2}(\\eta_2 + \\sqrt{\\eta_2^2+4}),\n\\end{align}</span> by simply noting that <span class=\"math-container\">$\\sum_{k \\in H_4} \\cos(2k\\frac{\\pi}{17})$</span> is positive.</p>\n<p>Last but not least, I consider <span class=\"math-container\">$\\eta_8$</span>. I'm interested in the extension <span class=\"math-container\">$\\mathbf{Q}(\\eta_4) \\subset \\mathbf{Q}(\\eta_8)$</span> with corresponding group <span class=\"math-container\">$H_4/H_8 = \\{ H_8,4H_8\\}$</span>. 
So, the minimum polynomial of <span class=\"math-container\">$\\eta_8$</span> over <span class=\"math-container\">$\\mathbf{Q}(\\eta_4)$</span> is given by:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} &amp;= (x-\\eta_8) \\cdot \\left(x- (\\zeta^4+\\zeta^{-4})\\right) \\\\\n&amp;= x^2-(\\eta_8+\\zeta^4+\\zeta^{-4})x+\\eta_8(\\zeta^4+\\zeta^{-4})\\\\\n&amp;= x^2-\\eta_4x+\\beta.\n\\end{align}</span>\nNow <span class=\"math-container\">$\\beta := \\eta_8(\\zeta^4+\\zeta^{-4}) = (\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4})$</span> can be expressed using <span class=\"math-container\">$\\eta_4$</span> and <span class=\"math-container\">$\\eta_2$</span> by noticing that:\n<span class=\"math-container\">\\begin{align}\n\\eta_4^2 &amp;= (\\zeta+\\zeta^{-1} + \\zeta^4+\\zeta^{-4})^2 = (\\zeta+\\zeta^{-1})^2 + 2(\\zeta+\\zeta^{-1})(\\zeta^4+\\zeta^{-4}) + (\\zeta^4+\\zeta^{-4})^2\\\\\n&amp;= \\zeta^2+\\zeta^{-2} + \\zeta^8+\\zeta^{-8} + 4 + 2 \\beta \\\\\n&amp;= \\eta_2 -\\eta_4 + 4 + 2\\beta.\n\\end{align}</span>\nThus <span class=\"math-container\">$\\beta = \\eta_8(\\zeta^4+\\zeta^{-4}) = \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4)$</span>. So I can see that:\n<span class=\"math-container\">\\begin{align}\nf^{\\eta_8}_{\\mathbf{Q}(\\eta_4)} = x^2 - \\eta_4x + \\frac{1}{2} (\\eta_4^2+\\eta_4-\\eta_2-4). \n\\end{align}</span>\nUsing the abc-formula (and checking whether the sign must be plus or minus) I find that:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 &amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{\\eta_4^2-2(\\eta_4^2+\\eta_4-\\eta_2-4)}\\right)\\\\\n&amp;= \\frac{1}{2} \\left(\\eta_4 + \\sqrt{-\\eta_4^2-2(\\eta_4-\\eta_2-4)}\\right).\n\\end{align}</span>\nWorking everything out (not very long) you end up with:\n<span class=\"math-container\">\\begin{align}\n\\eta_8 = \\frac{1}{8} (-1 + \\sqrt{17}) + \\frac{1}{8} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{4} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNow I conclude that:\n<span class=\"math-container\">\\begin{align}\n\\cos\\left({\\frac{2\\pi}{17}}\\right) = \\frac{1}{2}\\eta_8 = \\frac{1}{16} (-1 + \\sqrt{17}) + \\frac{1}{16} \\sqrt{34-2\\sqrt{17}} + \\frac{1}{8} \\sqrt{17+3\\sqrt{17} - \\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}}.\n\\end{align}</span>\nNote that in the big square root the term <span class=\"math-container\">$-\\frac{1}{2}(3+\\sqrt{17})\\sqrt{34-2\\sqrt{17}}$</span> appears, while Gauss actually gave the term <span class=\"math-container\">$-\\sqrt{34-2\\sqrt{17}}-2\\sqrt{34+2\\sqrt{17}}$</span>. 
You can show by squaring both expressions that you get <span class=\"math-container\">$170+38\\sqrt{17}$</span>, so both are square roots of <span class=\"math-container\">$170+38\\sqrt{17}$</span>; since both are negative, they represent the same number.</p>\n<p>You can try the same process with <span class=\"math-container\">$\\zeta_{257}$</span> (which will take 7 steps; this took 3 steps).</p>\n" }, { "answer_id": 2109558, "score": 3, "text": "<p>The more natural item is $$ 2 \\cos \\left( \\frac{2 \\pi}{17} \\right), $$ which is one root of \n$$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$\nfrom page 18 of <a href=\"https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false\" rel=\"nofollow noreferrer\">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\\omega \\neq 1$ but $\\omega^{17} = 1,$ it is not hard to show that $\\omega + \\frac{1}{\\omega}$ is a root of the given polynomial, using\n$$ \\small 1 + \\omega + \\omega^2 + \\omega^3 + \\omega^4 + \\omega^5 + \\omega^6 + \\omega^7 + \\omega^8 + \\omega^9 + \\omega^{10} + \\omega^{11} + \\omega^{12} + \\omega^{13} + \\omega^{14} + \\omega^{15} + \\omega^{16} =0 $$</p>\n\n<p><a href=\"https://i.stack.imgur.com/iGPye.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/iGPye.jpg\" alt=\"enter image description here\"></a></p>\n\n<p><a href=\"https://i.stack.imgur.com/HBV6z.jpg\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/HBV6z.jpg\" alt=\"enter image description here\"></a></p>\n" }
]
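A quick numerical check of the root claim in the Reuschle answer above: evaluating the octic at $2\cos(2\pi/17)$ should give zero up to rounding. This is a minimal illustrative sketch in Python (standard library only; the coefficient list simply transcribes the polynomial quoted in that answer).
<pre><code>import math

# x = 2*cos(2*pi/17) should be a root of
# x^8 + x^7 - 7x^6 - 6x^5 + 15x^4 + 10x^3 - 10x^2 - 4x + 1
x = 2 * math.cos(2 * math.pi / 17)
coeffs = [1, 1, -7, -6, 15, 10, -10, -4, 1]  # highest degree first

value = 0.0
for c in coeffs:      # Horner evaluation of the polynomial at x
    value = value * x + c

print(value)          # ~1e-14, i.e. zero up to floating-point error
</code></pre>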
Question: Proof of the radical expression of $\cos\dfrac{2\pi}{17}$ Answer 1: <p>The more natural item is $$ 2 \cos \left( \frac{2 \pi}{17} \right), $$ which is one root of $$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$ from page 18 of <a href="https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false" rel="nofollow noreferrer">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\omega \neq 1$ but $\omega^{17} = 1,$ it is not hard to show that $\omega + \frac{1}{\omega}$ is a root of the given polynomial, using $$ \small 1 + \omega + \omega^2 + \omega^3 + \omega^4 + \omega^5 + \omega^6 + \omega^7 + \omega^8 + \omega^9 + \omega^{10} + \omega^{11} + \omega^{12} + \omega^{13} + \omega^{14} + \omega^{15} + \omega^{16} =0 $$</p> <p><a href="https://i.stack.imgur.com/iGPye.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iGPye.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/HBV6z.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HBV6z.jpg" alt="enter image description here"></a></p> Answer 2: <p>I will show this by using Galois theory. I will be using some properties of the fields <span class="math-container">$\mathbf{Q}(\zeta_p)$</span> with <span class="math-container">$p$</span> prime, which you can find <a href="https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf" rel="nofollow noreferrer">here</a> on page 51.</p> <p>I start by looking at the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\zeta_{17})$</span>, where I define <span class="math-container">$\zeta:=\zeta_{17}:= \text{exp}( \frac{2 \pi i}{17}) = \cos\left(\frac{2\pi}{17}\right)+i\sin\left(\frac{2\pi}{17}\right)$</span>. The Galois group <span class="math-container">$\text{Gal}(\mathbf{Q}(\zeta)/\mathbf{Q})$</span> is isomorphic to <span class="math-container">$\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class="math-container">$16$</span>, which is a power of two, therefore <span class="math-container">$\zeta$</span> is constructible using ruler and compass. In other words, it is possible to express <span class="math-container">$\zeta$</span> using square roots (square roots of square roots, and so on).</p> <p>Consider the Gauss periods: <span class="math-container">\begin{align} \eta_2&amp;:=\zeta +\zeta^{-1} +\zeta^2 +\zeta^{-2} + \zeta^4 + \zeta^{-4} + \zeta^8 +\zeta^{-8} \\ \eta_4&amp;:= \zeta +\zeta^{-1} + \zeta^4 + \zeta^{-4}\\ \eta_8&amp;:= \zeta + \zeta^{-1}. \end{align}</span> Notice furthermore that <span class="math-container">$\zeta^{-1}$</span> is the complex conjugate of <span class="math-container">$\zeta$</span>, thus <span class="math-container">$\eta_8 := \zeta+\zeta^{-1} = 2\cos\left(\frac{2\pi}{17}\right)$</span>. Therefore, I will express <span class="math-container">$\eta_8$</span> using radicals.</p> <p>Let's look at the tower of fields: <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8) \subset \mathbf{Q}(\zeta)$</span>. 
Note that <span class="math-container">$\mathbf{Q}(\eta_8)$</span> corresponds to the unique subgroup <span class="math-container">$H_8:=\langle -1 \rangle \subset \mathbf{F}_{17}^*$</span> of index <span class="math-container">$8$</span>, while <span class="math-container">$\mathbf{Q}(\eta_4)$</span> resp. <span class="math-container">$\mathbf{Q}(\eta_2)$</span> correspond to the unique subgroups <span class="math-container">$H_4 := \langle 4 \rangle$</span> resp. <span class="math-container">$H_2:=\langle -8 \rangle$</span> of index <span class="math-container">$4$</span> resp. <span class="math-container">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class="math-container">$\mathbf{F}_{17}^* \supset H_2 \supset H_4 \supset H_8 \supset \{1\}$</span>.</p> <p>What I will do now is express <span class="math-container">$\eta_4$</span> in terms of <span class="math-container">$\eta_2$</span> (and square roots of it), and <span class="math-container">$\eta_8$</span> in terms of <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class="math-container">$\eta_j$</span> over <span class="math-container">$\mathbf{Q}(\eta_k) \subset \mathbf{Q}(\eta_j)$</span> and then using the abc-formula for quadratics.</p> <p>The minimum polynomial of <span class="math-container">$\eta_2$</span> over <span class="math-container">$\mathbf{Q}$</span>, denoted by <span class="math-container">$f^{\eta_2}_{\mathbf{Q}}$</span>, is given by: <span class="math-container">\begin{align} f^{\eta_2}_{\mathbf{Q}} = (x-\eta_2)\cdot \left(x-(\zeta^3+\zeta^{-3}+\zeta^5+\zeta^{-5}+\zeta^6+\zeta^{-6}+\zeta^7+\zeta^{-7})\right)=x^2+x-4. \end{align}</span> This follows from the fact that <span class="math-container">$\mathbf{F}_{17}^* / H_2 = \{ H_2, 3H_2\}$</span> (<span class="math-container">$\eta_2$</span> has exactly one other conjugate). I can see that <span class="math-container">$\eta_2 = \frac{1}{2}(-1\pm \sqrt{17})$</span> using the abc-formula. Note that you can verify with a calculator (check that the sum <span class="math-container">$\sum_{k \in H_2}\cos(2k\frac{\pi}{17})$</span> is positive), by using the definition of <span class="math-container">$\eta_2$</span>, that in fact: <span class="math-container">\begin{align} \eta_2 = \frac{1}{2}(-1+ \sqrt{17}). \end{align}</span></p> <p>I turn to <span class="math-container">$\eta_4$</span>, where I have that <span class="math-container">$\mathbf{F}_{17}^* / H_4 = \{ H_4,3H_4,6H_4,8H_4\}$</span>, which corresponds to the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_4)$</span>. However, I restrict to <span class="math-container">$\mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4)$</span>, which corresponds to <span class="math-container">$H_2/H_4 = \{ H_4,8H_4\}$</span> (<span class="math-container">$\eta_4$</span> has one other conjugate over <span class="math-container">$\mathbf{Q}(\eta_2)$</span>). 
Therefore the minimum polynomial <span class="math-container">$f^{\eta_4}_{\mathbf{Q}(\eta_2)}$</span> of <span class="math-container">$\eta_4$</span> over <span class="math-container">$\mathbf{Q}(\eta_2)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_4}_{\mathbf{Q}(\eta_2)} &amp;= (x-\eta_4)\cdot \left(x- (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8} )\right) \\ &amp;= x^2 -(\eta_4 +\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})x +\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})\\ &amp;= x^2-\eta_2x-1. \end{align}</span> Here you can verify that <span class="math-container">$\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class="math-container">$\zeta$</span> is <span class="math-container">$\Phi_{17} = \sum^{16}_{k=0}x^k$</span> (in other words <span class="math-container">$\sum^{16}_{k=0}\zeta^k =0$</span>). Now using the abc-formula you can see that <span class="math-container">$\eta_4 = \frac{1}{2}(\eta_2 \pm \sqrt{\eta_2^2+4})$</span>. You can check again that, in fact, <span class="math-container">\begin{align} \eta_4 = \frac{1}{2}(\eta_2 + \sqrt{\eta_2^2+4}), \end{align}</span> by simply noting that <span class="math-container">$\sum_{k \in H_4} \cos(2k\frac{\pi}{17})$</span> is positive.</p> <p>Last but not least, I consider <span class="math-container">$\eta_8$</span>. I'm interested in the extension <span class="math-container">$\mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8)$</span> with corresponding group <span class="math-container">$H_4/H_8 = \{ H_8,4H_8\}$</span>. So, the minimum polynomial of <span class="math-container">$\eta_8$</span> over <span class="math-container">$\mathbf{Q}(\eta_4)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} &amp;= (x-\eta_8) \cdot \left(x- (\zeta^4+\zeta^{-4})\right) \\ &amp;= x^2-(\eta_8+\zeta^4+\zeta^{-4})x+\eta_8(\zeta^4+\zeta^{-4})\\ &amp;= x^2-\eta_4x+\beta. \end{align}</span> Now <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = (\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4})$</span> can be expressed using <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> by noticing that: <span class="math-container">\begin{align} \eta_4^2 &amp;= (\zeta+\zeta^{-1} + \zeta^4+\zeta^{-4})^2 = (\zeta+\zeta^{-1})^2 + 2(\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4}) + (\zeta^4+\zeta^{-4})^2\\ &amp;= \zeta^2+\zeta^{-2} + \zeta^8+\zeta^{-8} + 4 + 2 \beta \\ &amp;= \eta_2 -\eta_4 + 4 + 2\beta. \end{align}</span> Thus <span class="math-container">$\beta = \eta_8(\zeta^4+\zeta^{-4}) = \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4)$</span>. So I can see that: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} = x^2 - \eta_4x + \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4). \end{align}</span> Using the abc-formula (and checking whether the sign must be plus or minus) I find that: <span class="math-container">\begin{align} \eta_8 &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{\eta_4^2-2(\eta_4^2+\eta_4-\eta_2-4)}\right)\\ &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{-\eta_4^2-2(\eta_4-\eta_2-4)}\right). \end{align}</span> Working everything out (not very long) you end up with: <span class="math-container">\begin{align} \eta_8 = \frac{1}{8} (-1 + \sqrt{17}) + \frac{1}{8} \sqrt{34-2\sqrt{17}} + \frac{1}{4} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. 
\end{align}</span> Now I conclude that: <span class="math-container">\begin{align} \cos\left({\frac{2\pi}{17}}\right) = \frac{1}{2}\eta_8 = \frac{1}{16} (-1 + \sqrt{17}) + \frac{1}{16} \sqrt{34-2\sqrt{17}} + \frac{1}{8} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Note that in the big square root the term <span class="math-container">$-\frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}$</span> appears, while Gauss actually gave the term <span class="math-container">$-\sqrt{34-2\sqrt{17}}-2\sqrt{34+2\sqrt{17}}$</span>. You can show by squaring both expressions that you get <span class="math-container">$170+38\sqrt{17}$</span>, so both are square roots of <span class="math-container">$170+38\sqrt{17}$</span>; since both are negative, they represent the same number.</p> <p>You can try the same process with <span class="math-container">$\zeta_{257}$</span> (which will take 7 steps; this took 3 steps).</p>
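<p>As a numerical sanity check on the chain of quadratics in the answer above, one can evaluate each period and the final closed form in floating point and compare with $2\cos\left(\frac{2\pi}{17}\right)$. The following is a minimal sketch in Python (standard library only); the names <code>eta2</code>, <code>eta4</code>, <code>eta8</code> and <code>closed_form</code> are illustrative choices mirroring $\eta_2$, $\eta_4$, $\eta_8$ and the boxed radical expression.</p>
<pre><code>import math

sqrt = math.sqrt

# Periods built step by step from the quadratics derived above,
# always taking the root with the plus sign, as argued in the answer.
eta2 = (-1 + sqrt(17)) / 2                    # root of x^2 + x - 4
eta4 = (eta2 + sqrt(eta2**2 + 4)) / 2         # root of x^2 - eta2*x - 1
beta = (eta4**2 + eta4 - eta2 - 4) / 2        # constant term of the last quadratic
eta8 = (eta4 + sqrt(eta4**2 - 4 * beta)) / 2  # root of x^2 - eta4*x + beta

# The closed form for eta8 obtained at the end of the derivation.
closed_form = ((-1 + sqrt(17)) / 8
               + sqrt(34 - 2 * sqrt(17)) / 8
               + sqrt(17 + 3 * sqrt(17)
                      - (3 + sqrt(17)) * sqrt(34 - 2 * sqrt(17)) / 2) / 4)

target = 2 * math.cos(2 * math.pi / 17)
print(eta8 - target)         # ~0 up to floating-point error
print(closed_form - target)  # ~0 up to floating-point error

# Gauss's term and the one above agree: both square to 170 + 38*sqrt(17).
a = -(3 + sqrt(17)) / 2 * sqrt(34 - 2 * sqrt(17))
b = -sqrt(34 - 2 * sqrt(17)) - 2 * sqrt(34 + 2 * sqrt(17))
print(a - b)                 # ~0 up to floating-point error
</code></pre>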
In other words, it is possible to express <span class="math-container">$\zeta$</span> using square roots (square roots of square roots ..).</p> <p>Consider the Gauss periods: <span class="math-container">\begin{align} \eta_2&amp;:=\zeta +\zeta^{-1} +\zeta^2 +\zeta^{-2} + \zeta^4 + \zeta^{-4} + \zeta^8 +\zeta^{-8} \\ \eta_4&amp;:= \zeta +\zeta^{-1} + \zeta^4 + \zeta^{-4}\\ \eta_8&amp;:= \zeta^1 + \zeta^{-1}. \end{align}</span> Notice furthermore <span class="math-container">$\zeta^{-1}$</span> is the complex conjugated of <span class="math-container">$\zeta$</span>, thus <span class="math-container">$\eta_8 := \zeta+\zeta^{-1} = 2\cos\left(\frac{2\pi}{17}\right)$</span>. Therefore, I will express <span class="math-container">$\eta_8$</span> using radicals.</p> <p>Let's look at the tower of fields: <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8) \subset \mathbf{Q}(\zeta)$</span>. Note that <span class="math-container">$\mathbf{Q}(\eta_8)$</span> corresponds to the unique subgroup <span class="math-container">$H_8:=\langle -1 \rangle \subset \mathbf{F}_{17}^*$</span> of index <span class="math-container">$8$</span>. Where <span class="math-container">$\mathbf{Q}(\eta_4)$</span> resp. <span class="math-container">$\mathbf{Q}(\eta_2)$</span> correspond to the unique subgroups <span class="math-container">$H_4 := \langle 4 \rangle$</span> resp. <span class="math-container">$H_2:=\langle -8 \rangle$</span> of index <span class="math-container">$4$</span> resp. <span class="math-container">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class="math-container">$\mathbf{F}_{17}^* \supset H_2 \supset H_4 \supset H_8 \supset \{1\}$</span>.</p> <p>What I will do now is express <span class="math-container">$\eta_4$</span> in terms of <span class="math-container">$\eta_2$</span> (and square roots of it), and <span class="math-container">$\eta_8$</span> in terms of <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class="math-container">$\eta_j$</span> over <span class="math-container">$\mathbf{Q}(\eta_k) \subset \mathbf{Q}(\eta_j)$</span> and then using the abc-formula for quadratics.</p> <p>The minimum polynomial of <span class="math-container">$\eta_2$</span> over <span class="math-container">$\mathbf{Q}$</span> denoted by <span class="math-container">$f^{\eta_2}_{\mathbf{Q}}$</span> is given by: <span class="math-container">\begin{align} f^{\eta_2}_{\mathbf{Q}} = (x-\eta_2)\cdot \left(x-(\zeta^3+\zeta^{-3}+\zeta^5+\zeta^{-5}+\zeta^6+\zeta^{-6}+\zeta^7+\zeta^{-7} \right)=x^2+x-4. \end{align}</span> This fact follows from the fact that <span class="math-container">$\mathbf{F}_{17}^* / H_2 = \{ H_2, 3H_2\}$</span> (<span class="math-container">$\eta_2$</span> has one different conjugate). I can see that <span class="math-container">$\eta_2 = \frac{1}{2}(-1\pm \sqrt{17})$</span> using the abc-formula. Note that you can verify with a calculator (check that the sum<span class="math-container">$\sum_{k \in H_2}\cos(2k\frac{\pi}{17})$</span> is positive), by using the definition of <span class="math-container">$\eta_2$</span>, that in fact: <span class="math-container">\begin{align} \eta_2 = \frac{1}{2}(-1+ \sqrt{17}). 
\end{align}</span></p> <p>I turn to <span class="math-container">$\eta_4$</span>, where I have that <span class="math-container">$\mathbf{F}_{17}^* / H_4 = \{ H_4,3H_4,6H_4,8H_4\}$</span>, which corresponds to the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_4)$</span>. However, I restrict to <span class="math-container">$\mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4)$</span> which corresponds to <span class="math-container">$H_2/H_4 = \{ H_4,8H_4\}$</span> (<span class="math-container">$\eta_4$</span> has one other conjugate over <span class="math-container">$\mathbf{Q}(\eta_2)$</span>). Therfore the minimum polynomial <span class="math-container">$f^{\eta_4}_{\mathbf{Q}(\eta_2)}$</span> of <span class="math-container">$\eta_4$</span> over <span class="math-container">$\mathbf{Q}(\eta_2)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_4}_{\mathbf{Q}(\eta_2)} &amp;= (x-\eta_4)\cdot \left(x- (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8} )\right) \\ &amp;= x^2 -(\eta_4 +\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})x +\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})\\ &amp;= x^2-\eta_2x-1. \end{align}</span> Where you can verify that <span class="math-container">$\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class="math-container">$\zeta$</span> is <span class="math-container">$\Phi_{17} = \sum^{16}_{k=0}x^k$</span> ( in other words <span class="math-container">$\sum^{16}_{k=0}\zeta^k =0$</span>). Now using the abc-formula you can see that: <span class="math-container">$\eta_4 = \frac{1}{2}(\eta_2 \pm \sqrt{\eta_2^2+4})$</span>. Where you can check again that in fact: <span class="math-container">\begin{align} \eta_4 = \frac{1}{2}(\eta_2 + \sqrt{\eta_2^2+4}), \end{align}</span> by simply noting that <span class="math-container">$\sum_{k \in H_4} \cos(2k\frac{\pi}{17})$</span> is positive.</p> <p>Last but not least, I consider <span class="math-container">$\eta_8$</span>. I'm interested in the extension <span class="math-container">$\mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8)$</span> with corresponding group <span class="math-container">$H_4/H_8 = \{ H_8,4H_8\}$</span>. So, the minimum polynomial of <span class="math-container">$\eta_8$</span> over <span class="math-container">$\mathbf{Q}(\eta_4)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} &amp;= (x-\eta_8) \cdot \left(x- (\zeta^4+\zeta^{-4})\right) \\ &amp;= x^2-(\eta_8+\zeta^4+\zeta^{-4})x+\eta_8(\zeta^4+\zeta^{-4})\\ &amp;= x^2-\eta_4x+\beta. \end{align}</span> Now <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = (\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4})$</span> can be expressed using <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> by noticing that: <span class="math-container">\begin{align} \eta_4^2 &amp;:= (\zeta+\zeta^{-1} + \zeta^4+\zeta^{-4})^2 = (\zeta+\zeta^{-1})^2 + 2(\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4}) + (\zeta^4+\zeta^{-4})^2\\ &amp;= \zeta^2+\zeta^{-2} + \zeta^8+\zeta^{-8} + 4 + 2 \beta \\ &amp;= \eta_2 -\eta_4 + 4 + 2\beta. \end{align}</span> Thus <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4)$</span>. So I can see that: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} = x^2 - \eta_4x + \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4). 
\end{align}</span> Using the abc-formula (and checking whether it has to plus or minus) I find that: <span class="math-container">\begin{align} \eta_8 &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{\eta_4^2-2(\eta_4^2+\eta_4-\eta_2-4)}\right)\\ &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{-\eta_4^2-2(\eta_4-\eta_2-4)}\right). \end{align}</span> Working everything out (not very long) you end up with: <span class="math-container">\begin{align} \eta_8 = \frac{1}{8} (-1 + \sqrt{17}) + \frac{1}{8} \sqrt{34-2\sqrt{17}} + \frac{1}{4} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Now I conclude that: <span class="math-container">\begin{align} \cos\left({\frac{2\pi}{17}}\right) = \frac{1}{2}\eta_8 = \frac{1}{16} (-1 + \sqrt{17}) + \frac{1}{16} \sqrt{34-2\sqrt{17}} + \frac{1}{8} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Note that in the big square root the term <span class="math-container">$ \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class="math-container">$-\sqrt{34-2\sqrt{17}}-2\sqrt{34+2\sqrt{17}}$</span>. You can show by squaring both expressions that you get <span class="math-container">$170+38\sqrt{17}$</span>, thus both are roots of <span class="math-container">$170+38\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p> <p>You can try the same process with <span class="math-container">$\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p> Answer 5: <p>The more natural item is $$ 2 \cos \left( \frac{2 \pi}{17} \right), $$ which is one root of $$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$ from page 18 of <a href="https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false" rel="nofollow noreferrer">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\omega \neq 1$ but $\omega^{17} = 1,$ it is not hard to show that $\omega + \frac{1}{\omega}$ is a root of the given polynomial, using $$ \small 1 + \omega + \omega^2 + \omega^3 + \omega^4 + \omega^5 + \omega^6 + \omega^7 + \omega^8 + \omega^9 + \omega^{10} + \omega^{11} + \omega^{12} + \omega^{13} + \omega^{14} + \omega^{15} + \omega^{16} =0 $$</p> <p><a href="https://i.stack.imgur.com/iGPye.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iGPye.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/HBV6z.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HBV6z.jpg" alt="enter image description here"></a></p> Answer 6: <p>I will show this by using Galois-theory. I will be using some properties of the fields <span class="math-container">$\mathbf{Q}(\zeta_p)$</span> with <span class="math-container">$p$</span> prime, which you can find <a href="https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf" rel="nofollow noreferrer">here</a> on page 51.</p> <p>I start by looking at the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\zeta_{17})$</span>, where I define <span class="math-container">$\zeta:=\zeta_{17}:= \text{exp}( \frac{2 \pi i}{17}) = \cos\left(\frac{2\pi}{17}\right)+i\sin\left(\frac{2\pi}{17}\right)$</span>. 
The galois-group <span class="math-container">$\text{Gal}(\mathbf{Q}(\zeta)/\mathbf{Q})$</span> is isomorphic to <span class="math-container">$\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class="math-container">$16$</span>, which is a power of two, therefore <span class="math-container">$\zeta$</span> is constructible using ruler and compass. In other words, it is possible to express <span class="math-container">$\zeta$</span> using square roots (square roots of square roots ..).</p> <p>Consider the Gauss periods: <span class="math-container">\begin{align} \eta_2&amp;:=\zeta +\zeta^{-1} +\zeta^2 +\zeta^{-2} + \zeta^4 + \zeta^{-4} + \zeta^8 +\zeta^{-8} \\ \eta_4&amp;:= \zeta +\zeta^{-1} + \zeta^4 + \zeta^{-4}\\ \eta_8&amp;:= \zeta^1 + \zeta^{-1}. \end{align}</span> Notice furthermore <span class="math-container">$\zeta^{-1}$</span> is the complex conjugated of <span class="math-container">$\zeta$</span>, thus <span class="math-container">$\eta_8 := \zeta+\zeta^{-1} = 2\cos\left(\frac{2\pi}{17}\right)$</span>. Therefore, I will express <span class="math-container">$\eta_8$</span> using radicals.</p> <p>Let's look at the tower of fields: <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8) \subset \mathbf{Q}(\zeta)$</span>. Note that <span class="math-container">$\mathbf{Q}(\eta_8)$</span> corresponds to the unique subgroup <span class="math-container">$H_8:=\langle -1 \rangle \subset \mathbf{F}_{17}^*$</span> of index <span class="math-container">$8$</span>. Where <span class="math-container">$\mathbf{Q}(\eta_4)$</span> resp. <span class="math-container">$\mathbf{Q}(\eta_2)$</span> correspond to the unique subgroups <span class="math-container">$H_4 := \langle 4 \rangle$</span> resp. <span class="math-container">$H_2:=\langle -8 \rangle$</span> of index <span class="math-container">$4$</span> resp. <span class="math-container">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class="math-container">$\mathbf{F}_{17}^* \supset H_2 \supset H_4 \supset H_8 \supset \{1\}$</span>.</p> <p>What I will do now is express <span class="math-container">$\eta_4$</span> in terms of <span class="math-container">$\eta_2$</span> (and square roots of it), and <span class="math-container">$\eta_8$</span> in terms of <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class="math-container">$\eta_j$</span> over <span class="math-container">$\mathbf{Q}(\eta_k) \subset \mathbf{Q}(\eta_j)$</span> and then using the abc-formula for quadratics.</p> <p>The minimum polynomial of <span class="math-container">$\eta_2$</span> over <span class="math-container">$\mathbf{Q}$</span> denoted by <span class="math-container">$f^{\eta_2}_{\mathbf{Q}}$</span> is given by: <span class="math-container">\begin{align} f^{\eta_2}_{\mathbf{Q}} = (x-\eta_2)\cdot \left(x-(\zeta^3+\zeta^{-3}+\zeta^5+\zeta^{-5}+\zeta^6+\zeta^{-6}+\zeta^7+\zeta^{-7} \right)=x^2+x-4. \end{align}</span> This fact follows from the fact that <span class="math-container">$\mathbf{F}_{17}^* / H_2 = \{ H_2, 3H_2\}$</span> (<span class="math-container">$\eta_2$</span> has one different conjugate). I can see that <span class="math-container">$\eta_2 = \frac{1}{2}(-1\pm \sqrt{17})$</span> using the abc-formula. 
Note that you can verify with a calculator (check that the sum<span class="math-container">$\sum_{k \in H_2}\cos(2k\frac{\pi}{17})$</span> is positive), by using the definition of <span class="math-container">$\eta_2$</span>, that in fact: <span class="math-container">\begin{align} \eta_2 = \frac{1}{2}(-1+ \sqrt{17}). \end{align}</span></p> <p>I turn to <span class="math-container">$\eta_4$</span>, where I have that <span class="math-container">$\mathbf{F}_{17}^* / H_4 = \{ H_4,3H_4,6H_4,8H_4\}$</span>, which corresponds to the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_4)$</span>. However, I restrict to <span class="math-container">$\mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4)$</span> which corresponds to <span class="math-container">$H_2/H_4 = \{ H_4,8H_4\}$</span> (<span class="math-container">$\eta_4$</span> has one other conjugate over <span class="math-container">$\mathbf{Q}(\eta_2)$</span>). Therfore the minimum polynomial <span class="math-container">$f^{\eta_4}_{\mathbf{Q}(\eta_2)}$</span> of <span class="math-container">$\eta_4$</span> over <span class="math-container">$\mathbf{Q}(\eta_2)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_4}_{\mathbf{Q}(\eta_2)} &amp;= (x-\eta_4)\cdot \left(x- (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8} )\right) \\ &amp;= x^2 -(\eta_4 +\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})x +\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})\\ &amp;= x^2-\eta_2x-1. \end{align}</span> Where you can verify that <span class="math-container">$\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class="math-container">$\zeta$</span> is <span class="math-container">$\Phi_{17} = \sum^{16}_{k=0}x^k$</span> ( in other words <span class="math-container">$\sum^{16}_{k=0}\zeta^k =0$</span>). Now using the abc-formula you can see that: <span class="math-container">$\eta_4 = \frac{1}{2}(\eta_2 \pm \sqrt{\eta_2^2+4})$</span>. Where you can check again that in fact: <span class="math-container">\begin{align} \eta_4 = \frac{1}{2}(\eta_2 + \sqrt{\eta_2^2+4}), \end{align}</span> by simply noting that <span class="math-container">$\sum_{k \in H_4} \cos(2k\frac{\pi}{17})$</span> is positive.</p> <p>Last but not least, I consider <span class="math-container">$\eta_8$</span>. I'm interested in the extension <span class="math-container">$\mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8)$</span> with corresponding group <span class="math-container">$H_4/H_8 = \{ H_8,4H_8\}$</span>. So, the minimum polynomial of <span class="math-container">$\eta_8$</span> over <span class="math-container">$\mathbf{Q}(\eta_4)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} &amp;= (x-\eta_8) \cdot \left(x- (\zeta^4+\zeta^{-4})\right) \\ &amp;= x^2-(\eta_8+\zeta^4+\zeta^{-4})x+\eta_8(\zeta^4+\zeta^{-4})\\ &amp;= x^2-\eta_4x+\beta. \end{align}</span> Now <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = (\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4})$</span> can be expressed using <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> by noticing that: <span class="math-container">\begin{align} \eta_4^2 &amp;:= (\zeta+\zeta^{-1} + \zeta^4+\zeta^{-4})^2 = (\zeta+\zeta^{-1})^2 + 2(\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4}) + (\zeta^4+\zeta^{-4})^2\\ &amp;= \zeta^2+\zeta^{-2} + \zeta^8+\zeta^{-8} + 4 + 2 \beta \\ &amp;= \eta_2 -\eta_4 + 4 + 2\beta. 
\end{align}</span> Thus <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4)$</span>. So I can see that: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} = x^2 - \eta_4x + \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4). \end{align}</span> Using the abc-formula (and checking whether it has to plus or minus) I find that: <span class="math-container">\begin{align} \eta_8 &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{\eta_4^2-2(\eta_4^2+\eta_4-\eta_2-4)}\right)\\ &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{-\eta_4^2-2(\eta_4-\eta_2-4)}\right). \end{align}</span> Working everything out (not very long) you end up with: <span class="math-container">\begin{align} \eta_8 = \frac{1}{8} (-1 + \sqrt{17}) + \frac{1}{8} \sqrt{34-2\sqrt{17}} + \frac{1}{4} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Now I conclude that: <span class="math-container">\begin{align} \cos\left({\frac{2\pi}{17}}\right) = \frac{1}{2}\eta_8 = \frac{1}{16} (-1 + \sqrt{17}) + \frac{1}{16} \sqrt{34-2\sqrt{17}} + \frac{1}{8} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Note that in the big square root the term <span class="math-container">$ \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class="math-container">$-\sqrt{34-2\sqrt{17}}-2\sqrt{34+2\sqrt{17}}$</span>. You can show by squaring both expressions that you get <span class="math-container">$170+38\sqrt{17}$</span>, thus both are roots of <span class="math-container">$170+38\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p> <p>You can try the same process with <span class="math-container">$\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p> Answer 7: <p>The more natural item is $$ 2 \cos \left( \frac{2 \pi}{17} \right), $$ which is one root of $$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$ from page 18 of <a href="https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false" rel="nofollow noreferrer">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\omega \neq 1$ but $\omega^{17} = 1,$ it is not hard to show that $\omega + \frac{1}{\omega}$ is a root of the given polynomial, using $$ \small 1 + \omega + \omega^2 + \omega^3 + \omega^4 + \omega^5 + \omega^6 + \omega^7 + \omega^8 + \omega^9 + \omega^{10} + \omega^{11} + \omega^{12} + \omega^{13} + \omega^{14} + \omega^{15} + \omega^{16} =0 $$</p> <p><a href="https://i.stack.imgur.com/iGPye.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iGPye.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/HBV6z.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HBV6z.jpg" alt="enter image description here"></a></p> Answer 8: <p>I will show this by using Galois-theory. 
I will be using some properties of the fields <span class="math-container">$\mathbf{Q}(\zeta_p)$</span> with <span class="math-container">$p$</span> prime, which you can find <a href="https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf" rel="nofollow noreferrer">here</a> on page 51.</p> <p>I start by looking at the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\zeta_{17})$</span>, where I define <span class="math-container">$\zeta:=\zeta_{17}:= \text{exp}( \frac{2 \pi i}{17}) = \cos\left(\frac{2\pi}{17}\right)+i\sin\left(\frac{2\pi}{17}\right)$</span>. The galois-group <span class="math-container">$\text{Gal}(\mathbf{Q}(\zeta)/\mathbf{Q})$</span> is isomorphic to <span class="math-container">$\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class="math-container">$16$</span>, which is a power of two, therefore <span class="math-container">$\zeta$</span> is constructible using ruler and compass. In other words, it is possible to express <span class="math-container">$\zeta$</span> using square roots (square roots of square roots ..).</p> <p>Consider the Gauss periods: <span class="math-container">\begin{align} \eta_2&amp;:=\zeta +\zeta^{-1} +\zeta^2 +\zeta^{-2} + \zeta^4 + \zeta^{-4} + \zeta^8 +\zeta^{-8} \\ \eta_4&amp;:= \zeta +\zeta^{-1} + \zeta^4 + \zeta^{-4}\\ \eta_8&amp;:= \zeta^1 + \zeta^{-1}. \end{align}</span> Notice furthermore <span class="math-container">$\zeta^{-1}$</span> is the complex conjugated of <span class="math-container">$\zeta$</span>, thus <span class="math-container">$\eta_8 := \zeta+\zeta^{-1} = 2\cos\left(\frac{2\pi}{17}\right)$</span>. Therefore, I will express <span class="math-container">$\eta_8$</span> using radicals.</p> <p>Let's look at the tower of fields: <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8) \subset \mathbf{Q}(\zeta)$</span>. Note that <span class="math-container">$\mathbf{Q}(\eta_8)$</span> corresponds to the unique subgroup <span class="math-container">$H_8:=\langle -1 \rangle \subset \mathbf{F}_{17}^*$</span> of index <span class="math-container">$8$</span>. Where <span class="math-container">$\mathbf{Q}(\eta_4)$</span> resp. <span class="math-container">$\mathbf{Q}(\eta_2)$</span> correspond to the unique subgroups <span class="math-container">$H_4 := \langle 4 \rangle$</span> resp. <span class="math-container">$H_2:=\langle -8 \rangle$</span> of index <span class="math-container">$4$</span> resp. <span class="math-container">$2$</span>. 
So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class="math-container">$\mathbf{F}_{17}^* \supset H_2 \supset H_4 \supset H_8 \supset \{1\}$</span>.</p> <p>What I will do now is express <span class="math-container">$\eta_4$</span> in terms of <span class="math-container">$\eta_2$</span> (and square roots of it), and <span class="math-container">$\eta_8$</span> in terms of <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class="math-container">$\eta_j$</span> over <span class="math-container">$\mathbf{Q}(\eta_k) \subset \mathbf{Q}(\eta_j)$</span> and then using the abc-formula for quadratics.</p> <p>The minimum polynomial of <span class="math-container">$\eta_2$</span> over <span class="math-container">$\mathbf{Q}$</span> denoted by <span class="math-container">$f^{\eta_2}_{\mathbf{Q}}$</span> is given by: <span class="math-container">\begin{align} f^{\eta_2}_{\mathbf{Q}} = (x-\eta_2)\cdot \left(x-(\zeta^3+\zeta^{-3}+\zeta^5+\zeta^{-5}+\zeta^6+\zeta^{-6}+\zeta^7+\zeta^{-7} \right)=x^2+x-4. \end{align}</span> This fact follows from the fact that <span class="math-container">$\mathbf{F}_{17}^* / H_2 = \{ H_2, 3H_2\}$</span> (<span class="math-container">$\eta_2$</span> has one different conjugate). I can see that <span class="math-container">$\eta_2 = \frac{1}{2}(-1\pm \sqrt{17})$</span> using the abc-formula. Note that you can verify with a calculator (check that the sum<span class="math-container">$\sum_{k \in H_2}\cos(2k\frac{\pi}{17})$</span> is positive), by using the definition of <span class="math-container">$\eta_2$</span>, that in fact: <span class="math-container">\begin{align} \eta_2 = \frac{1}{2}(-1+ \sqrt{17}). \end{align}</span></p> <p>I turn to <span class="math-container">$\eta_4$</span>, where I have that <span class="math-container">$\mathbf{F}_{17}^* / H_4 = \{ H_4,3H_4,6H_4,8H_4\}$</span>, which corresponds to the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_4)$</span>. However, I restrict to <span class="math-container">$\mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4)$</span> which corresponds to <span class="math-container">$H_2/H_4 = \{ H_4,8H_4\}$</span> (<span class="math-container">$\eta_4$</span> has one other conjugate over <span class="math-container">$\mathbf{Q}(\eta_2)$</span>). Therfore the minimum polynomial <span class="math-container">$f^{\eta_4}_{\mathbf{Q}(\eta_2)}$</span> of <span class="math-container">$\eta_4$</span> over <span class="math-container">$\mathbf{Q}(\eta_2)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_4}_{\mathbf{Q}(\eta_2)} &amp;= (x-\eta_4)\cdot \left(x- (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8} )\right) \\ &amp;= x^2 -(\eta_4 +\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})x +\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})\\ &amp;= x^2-\eta_2x-1. \end{align}</span> Where you can verify that <span class="math-container">$\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class="math-container">$\zeta$</span> is <span class="math-container">$\Phi_{17} = \sum^{16}_{k=0}x^k$</span> ( in other words <span class="math-container">$\sum^{16}_{k=0}\zeta^k =0$</span>). Now using the abc-formula you can see that: <span class="math-container">$\eta_4 = \frac{1}{2}(\eta_2 \pm \sqrt{\eta_2^2+4})$</span>. 
Where you can check again that in fact: <span class="math-container">\begin{align} \eta_4 = \frac{1}{2}(\eta_2 + \sqrt{\eta_2^2+4}), \end{align}</span> by simply noting that <span class="math-container">$\sum_{k \in H_4} \cos(2k\frac{\pi}{17})$</span> is positive.</p> <p>Last but not least, I consider <span class="math-container">$\eta_8$</span>. I'm interested in the extension <span class="math-container">$\mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8)$</span> with corresponding group <span class="math-container">$H_4/H_8 = \{ H_8,4H_8\}$</span>. So, the minimum polynomial of <span class="math-container">$\eta_8$</span> over <span class="math-container">$\mathbf{Q}(\eta_4)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} &amp;= (x-\eta_8) \cdot \left(x- (\zeta^4+\zeta^{-4})\right) \\ &amp;= x^2-(\eta_8+\zeta^4+\zeta^{-4})x+\eta_8(\zeta^4+\zeta^{-4})\\ &amp;= x^2-\eta_4x+\beta. \end{align}</span> Now <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = (\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4})$</span> can be expressed using <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> by noticing that: <span class="math-container">\begin{align} \eta_4^2 &amp;:= (\zeta+\zeta^{-1} + \zeta^4+\zeta^{-4})^2 = (\zeta+\zeta^{-1})^2 + 2(\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4}) + (\zeta^4+\zeta^{-4})^2\\ &amp;= \zeta^2+\zeta^{-2} + \zeta^8+\zeta^{-8} + 4 + 2 \beta \\ &amp;= \eta_2 -\eta_4 + 4 + 2\beta. \end{align}</span> Thus <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4)$</span>. So I can see that: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} = x^2 - \eta_4x + \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4). \end{align}</span> Using the abc-formula (and checking whether it has to plus or minus) I find that: <span class="math-container">\begin{align} \eta_8 &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{\eta_4^2-2(\eta_4^2+\eta_4-\eta_2-4)}\right)\\ &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{-\eta_4^2-2(\eta_4-\eta_2-4)}\right). \end{align}</span> Working everything out (not very long) you end up with: <span class="math-container">\begin{align} \eta_8 = \frac{1}{8} (-1 + \sqrt{17}) + \frac{1}{8} \sqrt{34-2\sqrt{17}} + \frac{1}{4} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Now I conclude that: <span class="math-container">\begin{align} \cos\left({\frac{2\pi}{17}}\right) = \frac{1}{2}\eta_8 = \frac{1}{16} (-1 + \sqrt{17}) + \frac{1}{16} \sqrt{34-2\sqrt{17}} + \frac{1}{8} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Note that in the big square root the term <span class="math-container">$ \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class="math-container">$-\sqrt{34-2\sqrt{17}}-2\sqrt{34+2\sqrt{17}}$</span>. 
Note that inside the big square root the term <span class="math-container">$\frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}$</span> appears with a minus sign, while Gauss actually gave the term <span class="math-container">$-\sqrt{34-2\sqrt{17}}-2\sqrt{34+2\sqrt{17}}$</span> in its place. You can show by squaring both expressions (without their signs) that each equals <span class="math-container">$170+38\sqrt{17}$</span>: <span class="math-container">\begin{align} \left(\tfrac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}\right)^2 &amp;= \tfrac{1}{4}(3+\sqrt{17})^2(34-2\sqrt{17}) = \tfrac{1}{4}(26+6\sqrt{17})(34-2\sqrt{17}) = 170+38\sqrt{17},\\ \left(\sqrt{34-2\sqrt{17}}+2\sqrt{34+2\sqrt{17}}\right)^2 &amp;= 170+6\sqrt{17}+4\sqrt{1088} = 170+38\sqrt{17}, \end{align}</span> using <span class="math-container">$\sqrt{1088}=8\sqrt{17}$</span>. Thus both expressions are square roots of <span class="math-container">$170+38\sqrt{17}$</span>, and since both (with their signs) are negative, they represent the same number.</p> <p>You can try the same process with <span class="math-container">$\zeta_{257}$</span> (which will take 7 quadratic steps, where this took 3).</p> Answer 9: <p>The more natural item is $$ 2 \cos \left( \frac{2 \pi}{17} \right), $$ which is one root of $$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$ from page 18 of <a href="https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false" rel="nofollow noreferrer">Reuschle (1875)</a>, using a method due to Gauss, having to do with cyclotomy and predating Galois. If $\omega \neq 1$ but $\omega^{17} = 1,$ it is not hard to show that $\omega + \frac{1}{\omega}$ is a root of the given polynomial, using $$ \small 1 + \omega + \omega^2 + \omega^3 + \omega^4 + \omega^5 + \omega^6 + \omega^7 + \omega^8 + \omega^9 + \omega^{10} + \omega^{11} + \omega^{12} + \omega^{13} + \omega^{14} + \omega^{15} + \omega^{16} =0 $$</p> <p><a href="https://i.stack.imgur.com/iGPye.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iGPye.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/HBV6z.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HBV6z.jpg" alt="enter image description here"></a></p>
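As a quick sanity check (a small Python sketch of my own, not taken from Reuschle's table), one can confirm numerically that $2\cos(2\pi k/17)$ for $k = 1, \dots, 8$ are precisely the eight roots of this polynomial:

```python
import math

def p(x):
    # x^8 + x^7 - 7x^6 - 6x^5 + 15x^4 + 10x^3 - 10x^2 - 4x + 1
    return (x**8 + x**7 - 7*x**6 - 6*x**5 + 15*x**4
            + 10*x**3 - 10*x**2 - 4*x + 1)

for k in range(1, 9):
    print(k, p(2 * math.cos(2 * math.pi * k / 17)))  # each value ~ 0 (up to rounding)
```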
\end{align}</span> Using the abc-formula (and checking whether it has to plus or minus) I find that: <span class="math-container">\begin{align} \eta_8 &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{\eta_4^2-2(\eta_4^2+\eta_4-\eta_2-4)}\right)\\ &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{-\eta_4^2-2(\eta_4-\eta_2-4)}\right). \end{align}</span> Working everything out (not very long) you end up with: <span class="math-container">\begin{align} \eta_8 = \frac{1}{8} (-1 + \sqrt{17}) + \frac{1}{8} \sqrt{34-2\sqrt{17}} + \frac{1}{4} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Now I conclude that: <span class="math-container">\begin{align} \cos\left({\frac{2\pi}{17}}\right) = \frac{1}{2}\eta_8 = \frac{1}{16} (-1 + \sqrt{17}) + \frac{1}{16} \sqrt{34-2\sqrt{17}} + \frac{1}{8} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Note that in the big square root the term <span class="math-container">$ \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class="math-container">$-\sqrt{34-2\sqrt{17}}-2\sqrt{34+2\sqrt{17}}$</span>. You can show by squaring both expressions that you get <span class="math-container">$170+38\sqrt{17}$</span>, thus both are roots of <span class="math-container">$170+38\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p> <p>You can try the same process with <span class="math-container">$\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p> Answer 21: <p>The more natural item is $$ 2 \cos \left( \frac{2 \pi}{17} \right), $$ which is one root of $$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$ from page 18 of <a href="https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false" rel="nofollow noreferrer">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\omega \neq 1$ but $\omega^{17} = 1,$ it is not hard to show that $\omega + \frac{1}{\omega}$ is a root of the given polynomial, using $$ \small 1 + \omega + \omega^2 + \omega^3 + \omega^4 + \omega^5 + \omega^6 + \omega^7 + \omega^8 + \omega^9 + \omega^{10} + \omega^{11} + \omega^{12} + \omega^{13} + \omega^{14} + \omega^{15} + \omega^{16} =0 $$</p> <p><a href="https://i.stack.imgur.com/iGPye.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iGPye.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/HBV6z.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HBV6z.jpg" alt="enter image description here"></a></p> Answer 22: <p>I will show this by using Galois-theory. I will be using some properties of the fields <span class="math-container">$\mathbf{Q}(\zeta_p)$</span> with <span class="math-container">$p$</span> prime, which you can find <a href="https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf" rel="nofollow noreferrer">here</a> on page 51.</p> <p>I start by looking at the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\zeta_{17})$</span>, where I define <span class="math-container">$\zeta:=\zeta_{17}:= \text{exp}( \frac{2 \pi i}{17}) = \cos\left(\frac{2\pi}{17}\right)+i\sin\left(\frac{2\pi}{17}\right)$</span>. 
The galois-group <span class="math-container">$\text{Gal}(\mathbf{Q}(\zeta)/\mathbf{Q})$</span> is isomorphic to <span class="math-container">$\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class="math-container">$16$</span>, which is a power of two, therefore <span class="math-container">$\zeta$</span> is constructible using ruler and compass. In other words, it is possible to express <span class="math-container">$\zeta$</span> using square roots (square roots of square roots ..).</p> <p>Consider the Gauss periods: <span class="math-container">\begin{align} \eta_2&amp;:=\zeta +\zeta^{-1} +\zeta^2 +\zeta^{-2} + \zeta^4 + \zeta^{-4} + \zeta^8 +\zeta^{-8} \\ \eta_4&amp;:= \zeta +\zeta^{-1} + \zeta^4 + \zeta^{-4}\\ \eta_8&amp;:= \zeta^1 + \zeta^{-1}. \end{align}</span> Notice furthermore <span class="math-container">$\zeta^{-1}$</span> is the complex conjugated of <span class="math-container">$\zeta$</span>, thus <span class="math-container">$\eta_8 := \zeta+\zeta^{-1} = 2\cos\left(\frac{2\pi}{17}\right)$</span>. Therefore, I will express <span class="math-container">$\eta_8$</span> using radicals.</p> <p>Let's look at the tower of fields: <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8) \subset \mathbf{Q}(\zeta)$</span>. Note that <span class="math-container">$\mathbf{Q}(\eta_8)$</span> corresponds to the unique subgroup <span class="math-container">$H_8:=\langle -1 \rangle \subset \mathbf{F}_{17}^*$</span> of index <span class="math-container">$8$</span>. Where <span class="math-container">$\mathbf{Q}(\eta_4)$</span> resp. <span class="math-container">$\mathbf{Q}(\eta_2)$</span> correspond to the unique subgroups <span class="math-container">$H_4 := \langle 4 \rangle$</span> resp. <span class="math-container">$H_2:=\langle -8 \rangle$</span> of index <span class="math-container">$4$</span> resp. <span class="math-container">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class="math-container">$\mathbf{F}_{17}^* \supset H_2 \supset H_4 \supset H_8 \supset \{1\}$</span>.</p> <p>What I will do now is express <span class="math-container">$\eta_4$</span> in terms of <span class="math-container">$\eta_2$</span> (and square roots of it), and <span class="math-container">$\eta_8$</span> in terms of <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class="math-container">$\eta_j$</span> over <span class="math-container">$\mathbf{Q}(\eta_k) \subset \mathbf{Q}(\eta_j)$</span> and then using the abc-formula for quadratics.</p> <p>The minimum polynomial of <span class="math-container">$\eta_2$</span> over <span class="math-container">$\mathbf{Q}$</span> denoted by <span class="math-container">$f^{\eta_2}_{\mathbf{Q}}$</span> is given by: <span class="math-container">\begin{align} f^{\eta_2}_{\mathbf{Q}} = (x-\eta_2)\cdot \left(x-(\zeta^3+\zeta^{-3}+\zeta^5+\zeta^{-5}+\zeta^6+\zeta^{-6}+\zeta^7+\zeta^{-7} \right)=x^2+x-4. \end{align}</span> This fact follows from the fact that <span class="math-container">$\mathbf{F}_{17}^* / H_2 = \{ H_2, 3H_2\}$</span> (<span class="math-container">$\eta_2$</span> has one different conjugate). I can see that <span class="math-container">$\eta_2 = \frac{1}{2}(-1\pm \sqrt{17})$</span> using the abc-formula. 
Note that you can verify with a calculator (check that the sum<span class="math-container">$\sum_{k \in H_2}\cos(2k\frac{\pi}{17})$</span> is positive), by using the definition of <span class="math-container">$\eta_2$</span>, that in fact: <span class="math-container">\begin{align} \eta_2 = \frac{1}{2}(-1+ \sqrt{17}). \end{align}</span></p> <p>I turn to <span class="math-container">$\eta_4$</span>, where I have that <span class="math-container">$\mathbf{F}_{17}^* / H_4 = \{ H_4,3H_4,6H_4,8H_4\}$</span>, which corresponds to the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_4)$</span>. However, I restrict to <span class="math-container">$\mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4)$</span> which corresponds to <span class="math-container">$H_2/H_4 = \{ H_4,8H_4\}$</span> (<span class="math-container">$\eta_4$</span> has one other conjugate over <span class="math-container">$\mathbf{Q}(\eta_2)$</span>). Therfore the minimum polynomial <span class="math-container">$f^{\eta_4}_{\mathbf{Q}(\eta_2)}$</span> of <span class="math-container">$\eta_4$</span> over <span class="math-container">$\mathbf{Q}(\eta_2)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_4}_{\mathbf{Q}(\eta_2)} &amp;= (x-\eta_4)\cdot \left(x- (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8} )\right) \\ &amp;= x^2 -(\eta_4 +\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})x +\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})\\ &amp;= x^2-\eta_2x-1. \end{align}</span> Where you can verify that <span class="math-container">$\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class="math-container">$\zeta$</span> is <span class="math-container">$\Phi_{17} = \sum^{16}_{k=0}x^k$</span> ( in other words <span class="math-container">$\sum^{16}_{k=0}\zeta^k =0$</span>). Now using the abc-formula you can see that: <span class="math-container">$\eta_4 = \frac{1}{2}(\eta_2 \pm \sqrt{\eta_2^2+4})$</span>. Where you can check again that in fact: <span class="math-container">\begin{align} \eta_4 = \frac{1}{2}(\eta_2 + \sqrt{\eta_2^2+4}), \end{align}</span> by simply noting that <span class="math-container">$\sum_{k \in H_4} \cos(2k\frac{\pi}{17})$</span> is positive.</p> <p>Last but not least, I consider <span class="math-container">$\eta_8$</span>. I'm interested in the extension <span class="math-container">$\mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8)$</span> with corresponding group <span class="math-container">$H_4/H_8 = \{ H_8,4H_8\}$</span>. So, the minimum polynomial of <span class="math-container">$\eta_8$</span> over <span class="math-container">$\mathbf{Q}(\eta_4)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} &amp;= (x-\eta_8) \cdot \left(x- (\zeta^4+\zeta^{-4})\right) \\ &amp;= x^2-(\eta_8+\zeta^4+\zeta^{-4})x+\eta_8(\zeta^4+\zeta^{-4})\\ &amp;= x^2-\eta_4x+\beta. \end{align}</span> Now <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = (\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4})$</span> can be expressed using <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> by noticing that: <span class="math-container">\begin{align} \eta_4^2 &amp;:= (\zeta+\zeta^{-1} + \zeta^4+\zeta^{-4})^2 = (\zeta+\zeta^{-1})^2 + 2(\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4}) + (\zeta^4+\zeta^{-4})^2\\ &amp;= \zeta^2+\zeta^{-2} + \zeta^8+\zeta^{-8} + 4 + 2 \beta \\ &amp;= \eta_2 -\eta_4 + 4 + 2\beta. 
\end{align}</span> Thus <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4)$</span>. So I can see that: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} = x^2 - \eta_4x + \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4). \end{align}</span> Using the abc-formula (and checking whether it has to plus or minus) I find that: <span class="math-container">\begin{align} \eta_8 &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{\eta_4^2-2(\eta_4^2+\eta_4-\eta_2-4)}\right)\\ &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{-\eta_4^2-2(\eta_4-\eta_2-4)}\right). \end{align}</span> Working everything out (not very long) you end up with: <span class="math-container">\begin{align} \eta_8 = \frac{1}{8} (-1 + \sqrt{17}) + \frac{1}{8} \sqrt{34-2\sqrt{17}} + \frac{1}{4} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Now I conclude that: <span class="math-container">\begin{align} \cos\left({\frac{2\pi}{17}}\right) = \frac{1}{2}\eta_8 = \frac{1}{16} (-1 + \sqrt{17}) + \frac{1}{16} \sqrt{34-2\sqrt{17}} + \frac{1}{8} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Note that in the big square root the term <span class="math-container">$ \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class="math-container">$-\sqrt{34-2\sqrt{17}}-2\sqrt{34+2\sqrt{17}}$</span>. You can show by squaring both expressions that you get <span class="math-container">$170+38\sqrt{17}$</span>, thus both are roots of <span class="math-container">$170+38\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p> <p>You can try the same process with <span class="math-container">$\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p> Answer 23: <p>The more natural item is $$ 2 \cos \left( \frac{2 \pi}{17} \right), $$ which is one root of $$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$ from page 18 of <a href="https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false" rel="nofollow noreferrer">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\omega \neq 1$ but $\omega^{17} = 1,$ it is not hard to show that $\omega + \frac{1}{\omega}$ is a root of the given polynomial, using $$ \small 1 + \omega + \omega^2 + \omega^3 + \omega^4 + \omega^5 + \omega^6 + \omega^7 + \omega^8 + \omega^9 + \omega^{10} + \omega^{11} + \omega^{12} + \omega^{13} + \omega^{14} + \omega^{15} + \omega^{16} =0 $$</p> <p><a href="https://i.stack.imgur.com/iGPye.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iGPye.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/HBV6z.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HBV6z.jpg" alt="enter image description here"></a></p> Answer 24: <p>I will show this by using Galois-theory. 
I will be using some properties of the fields <span class="math-container">$\mathbf{Q}(\zeta_p)$</span> with <span class="math-container">$p$</span> prime, which you can find <a href="https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf" rel="nofollow noreferrer">here</a> on page 51.</p> <p>I start by looking at the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\zeta_{17})$</span>, where I define <span class="math-container">$\zeta:=\zeta_{17}:= \text{exp}( \frac{2 \pi i}{17}) = \cos\left(\frac{2\pi}{17}\right)+i\sin\left(\frac{2\pi}{17}\right)$</span>. The galois-group <span class="math-container">$\text{Gal}(\mathbf{Q}(\zeta)/\mathbf{Q})$</span> is isomorphic to <span class="math-container">$\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class="math-container">$16$</span>, which is a power of two, therefore <span class="math-container">$\zeta$</span> is constructible using ruler and compass. In other words, it is possible to express <span class="math-container">$\zeta$</span> using square roots (square roots of square roots ..).</p> <p>Consider the Gauss periods: <span class="math-container">\begin{align} \eta_2&amp;:=\zeta +\zeta^{-1} +\zeta^2 +\zeta^{-2} + \zeta^4 + \zeta^{-4} + \zeta^8 +\zeta^{-8} \\ \eta_4&amp;:= \zeta +\zeta^{-1} + \zeta^4 + \zeta^{-4}\\ \eta_8&amp;:= \zeta^1 + \zeta^{-1}. \end{align}</span> Notice furthermore <span class="math-container">$\zeta^{-1}$</span> is the complex conjugated of <span class="math-container">$\zeta$</span>, thus <span class="math-container">$\eta_8 := \zeta+\zeta^{-1} = 2\cos\left(\frac{2\pi}{17}\right)$</span>. Therefore, I will express <span class="math-container">$\eta_8$</span> using radicals.</p> <p>Let's look at the tower of fields: <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8) \subset \mathbf{Q}(\zeta)$</span>. Note that <span class="math-container">$\mathbf{Q}(\eta_8)$</span> corresponds to the unique subgroup <span class="math-container">$H_8:=\langle -1 \rangle \subset \mathbf{F}_{17}^*$</span> of index <span class="math-container">$8$</span>. Where <span class="math-container">$\mathbf{Q}(\eta_4)$</span> resp. <span class="math-container">$\mathbf{Q}(\eta_2)$</span> correspond to the unique subgroups <span class="math-container">$H_4 := \langle 4 \rangle$</span> resp. <span class="math-container">$H_2:=\langle -8 \rangle$</span> of index <span class="math-container">$4$</span> resp. <span class="math-container">$2$</span>. 
So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class="math-container">$\mathbf{F}_{17}^* \supset H_2 \supset H_4 \supset H_8 \supset \{1\}$</span>.</p> <p>What I will do now is express <span class="math-container">$\eta_4$</span> in terms of <span class="math-container">$\eta_2$</span> (and square roots of it), and <span class="math-container">$\eta_8$</span> in terms of <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class="math-container">$\eta_j$</span> over <span class="math-container">$\mathbf{Q}(\eta_k) \subset \mathbf{Q}(\eta_j)$</span> and then using the abc-formula for quadratics.</p> <p>The minimum polynomial of <span class="math-container">$\eta_2$</span> over <span class="math-container">$\mathbf{Q}$</span> denoted by <span class="math-container">$f^{\eta_2}_{\mathbf{Q}}$</span> is given by: <span class="math-container">\begin{align} f^{\eta_2}_{\mathbf{Q}} = (x-\eta_2)\cdot \left(x-(\zeta^3+\zeta^{-3}+\zeta^5+\zeta^{-5}+\zeta^6+\zeta^{-6}+\zeta^7+\zeta^{-7} \right)=x^2+x-4. \end{align}</span> This fact follows from the fact that <span class="math-container">$\mathbf{F}_{17}^* / H_2 = \{ H_2, 3H_2\}$</span> (<span class="math-container">$\eta_2$</span> has one different conjugate). I can see that <span class="math-container">$\eta_2 = \frac{1}{2}(-1\pm \sqrt{17})$</span> using the abc-formula. Note that you can verify with a calculator (check that the sum<span class="math-container">$\sum_{k \in H_2}\cos(2k\frac{\pi}{17})$</span> is positive), by using the definition of <span class="math-container">$\eta_2$</span>, that in fact: <span class="math-container">\begin{align} \eta_2 = \frac{1}{2}(-1+ \sqrt{17}). \end{align}</span></p> <p>I turn to <span class="math-container">$\eta_4$</span>, where I have that <span class="math-container">$\mathbf{F}_{17}^* / H_4 = \{ H_4,3H_4,6H_4,8H_4\}$</span>, which corresponds to the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_4)$</span>. However, I restrict to <span class="math-container">$\mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4)$</span> which corresponds to <span class="math-container">$H_2/H_4 = \{ H_4,8H_4\}$</span> (<span class="math-container">$\eta_4$</span> has one other conjugate over <span class="math-container">$\mathbf{Q}(\eta_2)$</span>). Therfore the minimum polynomial <span class="math-container">$f^{\eta_4}_{\mathbf{Q}(\eta_2)}$</span> of <span class="math-container">$\eta_4$</span> over <span class="math-container">$\mathbf{Q}(\eta_2)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_4}_{\mathbf{Q}(\eta_2)} &amp;= (x-\eta_4)\cdot \left(x- (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8} )\right) \\ &amp;= x^2 -(\eta_4 +\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})x +\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})\\ &amp;= x^2-\eta_2x-1. \end{align}</span> Where you can verify that <span class="math-container">$\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class="math-container">$\zeta$</span> is <span class="math-container">$\Phi_{17} = \sum^{16}_{k=0}x^k$</span> ( in other words <span class="math-container">$\sum^{16}_{k=0}\zeta^k =0$</span>). Now using the abc-formula you can see that: <span class="math-container">$\eta_4 = \frac{1}{2}(\eta_2 \pm \sqrt{\eta_2^2+4})$</span>. 
Where you can check again that in fact: <span class="math-container">\begin{align} \eta_4 = \frac{1}{2}(\eta_2 + \sqrt{\eta_2^2+4}), \end{align}</span> by simply noting that <span class="math-container">$\sum_{k \in H_4} \cos(2k\frac{\pi}{17})$</span> is positive.</p> <p>Last but not least, I consider <span class="math-container">$\eta_8$</span>. I'm interested in the extension <span class="math-container">$\mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8)$</span> with corresponding group <span class="math-container">$H_4/H_8 = \{ H_8,4H_8\}$</span>. So, the minimum polynomial of <span class="math-container">$\eta_8$</span> over <span class="math-container">$\mathbf{Q}(\eta_4)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} &amp;= (x-\eta_8) \cdot \left(x- (\zeta^4+\zeta^{-4})\right) \\ &amp;= x^2-(\eta_8+\zeta^4+\zeta^{-4})x+\eta_8(\zeta^4+\zeta^{-4})\\ &amp;= x^2-\eta_4x+\beta. \end{align}</span> Now <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = (\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4})$</span> can be expressed using <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> by noticing that: <span class="math-container">\begin{align} \eta_4^2 &amp;:= (\zeta+\zeta^{-1} + \zeta^4+\zeta^{-4})^2 = (\zeta+\zeta^{-1})^2 + 2(\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4}) + (\zeta^4+\zeta^{-4})^2\\ &amp;= \zeta^2+\zeta^{-2} + \zeta^8+\zeta^{-8} + 4 + 2 \beta \\ &amp;= \eta_2 -\eta_4 + 4 + 2\beta. \end{align}</span> Thus <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4)$</span>. So I can see that: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} = x^2 - \eta_4x + \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4). \end{align}</span> Using the abc-formula (and checking whether it has to plus or minus) I find that: <span class="math-container">\begin{align} \eta_8 &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{\eta_4^2-2(\eta_4^2+\eta_4-\eta_2-4)}\right)\\ &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{-\eta_4^2-2(\eta_4-\eta_2-4)}\right). \end{align}</span> Working everything out (not very long) you end up with: <span class="math-container">\begin{align} \eta_8 = \frac{1}{8} (-1 + \sqrt{17}) + \frac{1}{8} \sqrt{34-2\sqrt{17}} + \frac{1}{4} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Now I conclude that: <span class="math-container">\begin{align} \cos\left({\frac{2\pi}{17}}\right) = \frac{1}{2}\eta_8 = \frac{1}{16} (-1 + \sqrt{17}) + \frac{1}{16} \sqrt{34-2\sqrt{17}} + \frac{1}{8} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Note that in the big square root the term <span class="math-container">$ \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class="math-container">$-\sqrt{34-2\sqrt{17}}-2\sqrt{34+2\sqrt{17}}$</span>. 
You can show by squaring both expressions that you get <span class="math-container">$170+38\sqrt{17}$</span>, thus both are roots of <span class="math-container">$170+38\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p> <p>You can try the same process with <span class="math-container">$\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p> Answer 25: <p>The more natural item is $$ 2 \cos \left( \frac{2 \pi}{17} \right), $$ which is one root of $$ x^8 + x^7 - 7 x^6 - 6 x^5 + 15 x^4 + 10 x^3 - 10 x^2 - 4 x + 1, $$ from page 18 of <a href="https://books.google.com/books?id=wt7lgfeYqMQC&amp;pg=PR1&amp;lpg=PR1&amp;dq=reuschle%20%20tafeln%20complexer%20primzahlen&amp;source=bl&amp;ots=VGZFPrfUBn&amp;sig=MlQ667PqXaQ9rAvLWkG3_F1rwsk&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiIwtSvm9TQAhUJ-2MKHXJIA_kQ6AEIODAE#v=onepage&amp;q=reuschle%20%20tafeln%20complexer%20primzahlen&amp;f=false" rel="nofollow noreferrer">Reuschle (1875)</a> using a method due to Gauss, having to do with cyclotomy, and predating Galois. If $\omega \neq 1$ but $\omega^{17} = 1,$ it is not hard to show that $\omega + \frac{1}{\omega}$ is a root of the given polynomial, using $$ \small 1 + \omega + \omega^2 + \omega^3 + \omega^4 + \omega^5 + \omega^6 + \omega^7 + \omega^8 + \omega^9 + \omega^{10} + \omega^{11} + \omega^{12} + \omega^{13} + \omega^{14} + \omega^{15} + \omega^{16} =0 $$</p> <p><a href="https://i.stack.imgur.com/iGPye.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iGPye.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/HBV6z.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HBV6z.jpg" alt="enter image description here"></a></p> Answer 26: <p>I will show this by using Galois-theory. I will be using some properties of the fields <span class="math-container">$\mathbf{Q}(\zeta_p)$</span> with <span class="math-container">$p$</span> prime, which you can find <a href="https://websites.math.leidenuniv.nl/algebra/algebra3eng.pdf" rel="nofollow noreferrer">here</a> on page 51.</p> <p>I start by looking at the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\zeta_{17})$</span>, where I define <span class="math-container">$\zeta:=\zeta_{17}:= \text{exp}( \frac{2 \pi i}{17}) = \cos\left(\frac{2\pi}{17}\right)+i\sin\left(\frac{2\pi}{17}\right)$</span>. The galois-group <span class="math-container">$\text{Gal}(\mathbf{Q}(\zeta)/\mathbf{Q})$</span> is isomorphic to <span class="math-container">$\mathbf{F}_{17}^*$</span>. Note that the order of this group is <span class="math-container">$16$</span>, which is a power of two, therefore <span class="math-container">$\zeta$</span> is constructible using ruler and compass. In other words, it is possible to express <span class="math-container">$\zeta$</span> using square roots (square roots of square roots ..).</p> <p>Consider the Gauss periods: <span class="math-container">\begin{align} \eta_2&amp;:=\zeta +\zeta^{-1} +\zeta^2 +\zeta^{-2} + \zeta^4 + \zeta^{-4} + \zeta^8 +\zeta^{-8} \\ \eta_4&amp;:= \zeta +\zeta^{-1} + \zeta^4 + \zeta^{-4}\\ \eta_8&amp;:= \zeta^1 + \zeta^{-1}. \end{align}</span> Notice furthermore <span class="math-container">$\zeta^{-1}$</span> is the complex conjugated of <span class="math-container">$\zeta$</span>, thus <span class="math-container">$\eta_8 := \zeta+\zeta^{-1} = 2\cos\left(\frac{2\pi}{17}\right)$</span>. 
Therefore, I will express <span class="math-container">$\eta_8$</span> using radicals.</p> <p>Let's look at the tower of fields: <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8) \subset \mathbf{Q}(\zeta)$</span>. Note that <span class="math-container">$\mathbf{Q}(\eta_8)$</span> corresponds to the unique subgroup <span class="math-container">$H_8:=\langle -1 \rangle \subset \mathbf{F}_{17}^*$</span> of index <span class="math-container">$8$</span>. Where <span class="math-container">$\mathbf{Q}(\eta_4)$</span> resp. <span class="math-container">$\mathbf{Q}(\eta_2)$</span> correspond to the unique subgroups <span class="math-container">$H_4 := \langle 4 \rangle$</span> resp. <span class="math-container">$H_2:=\langle -8 \rangle$</span> of index <span class="math-container">$4$</span> resp. <span class="math-container">$2$</span>. So, each step in the tower is quadratic, and we have the corresponding chain of subgroups: <span class="math-container">$\mathbf{F}_{17}^* \supset H_2 \supset H_4 \supset H_8 \supset \{1\}$</span>.</p> <p>What I will do now is express <span class="math-container">$\eta_4$</span> in terms of <span class="math-container">$\eta_2$</span> (and square roots of it), and <span class="math-container">$\eta_8$</span> in terms of <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> (and square roots of them), by determining minimum polynomials of each <span class="math-container">$\eta_j$</span> over <span class="math-container">$\mathbf{Q}(\eta_k) \subset \mathbf{Q}(\eta_j)$</span> and then using the abc-formula for quadratics.</p> <p>The minimum polynomial of <span class="math-container">$\eta_2$</span> over <span class="math-container">$\mathbf{Q}$</span> denoted by <span class="math-container">$f^{\eta_2}_{\mathbf{Q}}$</span> is given by: <span class="math-container">\begin{align} f^{\eta_2}_{\mathbf{Q}} = (x-\eta_2)\cdot \left(x-(\zeta^3+\zeta^{-3}+\zeta^5+\zeta^{-5}+\zeta^6+\zeta^{-6}+\zeta^7+\zeta^{-7} \right)=x^2+x-4. \end{align}</span> This fact follows from the fact that <span class="math-container">$\mathbf{F}_{17}^* / H_2 = \{ H_2, 3H_2\}$</span> (<span class="math-container">$\eta_2$</span> has one different conjugate). I can see that <span class="math-container">$\eta_2 = \frac{1}{2}(-1\pm \sqrt{17})$</span> using the abc-formula. Note that you can verify with a calculator (check that the sum<span class="math-container">$\sum_{k \in H_2}\cos(2k\frac{\pi}{17})$</span> is positive), by using the definition of <span class="math-container">$\eta_2$</span>, that in fact: <span class="math-container">\begin{align} \eta_2 = \frac{1}{2}(-1+ \sqrt{17}). \end{align}</span></p> <p>I turn to <span class="math-container">$\eta_4$</span>, where I have that <span class="math-container">$\mathbf{F}_{17}^* / H_4 = \{ H_4,3H_4,6H_4,8H_4\}$</span>, which corresponds to the extension <span class="math-container">$\mathbf{Q} \subset \mathbf{Q}(\eta_4)$</span>. However, I restrict to <span class="math-container">$\mathbf{Q}(\eta_2) \subset \mathbf{Q}(\eta_4)$</span> which corresponds to <span class="math-container">$H_2/H_4 = \{ H_4,8H_4\}$</span> (<span class="math-container">$\eta_4$</span> has one other conjugate over <span class="math-container">$\mathbf{Q}(\eta_2)$</span>). 
Therfore the minimum polynomial <span class="math-container">$f^{\eta_4}_{\mathbf{Q}(\eta_2)}$</span> of <span class="math-container">$\eta_4$</span> over <span class="math-container">$\mathbf{Q}(\eta_2)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_4}_{\mathbf{Q}(\eta_2)} &amp;= (x-\eta_4)\cdot \left(x- (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8} )\right) \\ &amp;= x^2 -(\eta_4 +\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})x +\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8})\\ &amp;= x^2-\eta_2x-1. \end{align}</span> Where you can verify that <span class="math-container">$\eta_4 \cdot (\zeta^2+\zeta^{-2}+\zeta^8+\zeta^{-8}) = -1$</span> using that the minimum polynomial of <span class="math-container">$\zeta$</span> is <span class="math-container">$\Phi_{17} = \sum^{16}_{k=0}x^k$</span> ( in other words <span class="math-container">$\sum^{16}_{k=0}\zeta^k =0$</span>). Now using the abc-formula you can see that: <span class="math-container">$\eta_4 = \frac{1}{2}(\eta_2 \pm \sqrt{\eta_2^2+4})$</span>. Where you can check again that in fact: <span class="math-container">\begin{align} \eta_4 = \frac{1}{2}(\eta_2 + \sqrt{\eta_2^2+4}), \end{align}</span> by simply noting that <span class="math-container">$\sum_{k \in H_4} \cos(2k\frac{\pi}{17})$</span> is positive.</p> <p>Last but not least, I consider <span class="math-container">$\eta_8$</span>. I'm interested in the extension <span class="math-container">$\mathbf{Q}(\eta_4) \subset \mathbf{Q}(\eta_8)$</span> with corresponding group <span class="math-container">$H_4/H_8 = \{ H_8,4H_8\}$</span>. So, the minimum polynomial of <span class="math-container">$\eta_8$</span> over <span class="math-container">$\mathbf{Q}(\eta_4)$</span> is given by: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} &amp;= (x-\eta_8) \cdot \left(x- (\zeta^4+\zeta^{-4})\right) \\ &amp;= x^2-(\eta_8+\zeta^4+\zeta^{-4})x+\eta_8(\zeta^4+\zeta^{-4})\\ &amp;= x^2-\eta_4x+\beta. \end{align}</span> Now <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = (\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4})$</span> can be expressed using <span class="math-container">$\eta_4$</span> and <span class="math-container">$\eta_2$</span> by noticing that: <span class="math-container">\begin{align} \eta_4^2 &amp;:= (\zeta+\zeta^{-1} + \zeta^4+\zeta^{-4})^2 = (\zeta+\zeta^{-1})^2 + 2(\zeta+\zeta^{-1})(\zeta^4+\zeta^{-4}) + (\zeta^4+\zeta^{-4})^2\\ &amp;= \zeta^2+\zeta^{-2} + \zeta^8+\zeta^{-8} + 4 + 2 \beta \\ &amp;= \eta_2 -\eta_4 + 4 + 2\beta. \end{align}</span> Thus <span class="math-container">$\beta := \eta_8(\zeta^4+\zeta^{-4}) = \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4)$</span>. So I can see that: <span class="math-container">\begin{align} f^{\eta_8}_{\mathbf{Q}(\eta_4)} = x^2 - \eta_4x + \frac{1}{2} (\eta_4^2+\eta_4-\eta_2-4). \end{align}</span> Using the abc-formula (and checking whether it has to plus or minus) I find that: <span class="math-container">\begin{align} \eta_8 &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{\eta_4^2-2(\eta_4^2+\eta_4-\eta_2-4)}\right)\\ &amp;= \frac{1}{2} \left(\eta_4 + \sqrt{-\eta_4^2-2(\eta_4-\eta_2-4)}\right). \end{align}</span> Working everything out (not very long) you end up with: <span class="math-container">\begin{align} \eta_8 = \frac{1}{8} (-1 + \sqrt{17}) + \frac{1}{8} \sqrt{34-2\sqrt{17}} + \frac{1}{4} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. 
\end{align}</span> Now I conclude that: <span class="math-container">\begin{align} \cos\left({\frac{2\pi}{17}}\right) = \frac{1}{2}\eta_8 = \frac{1}{16} (-1 + \sqrt{17}) + \frac{1}{16} \sqrt{34-2\sqrt{17}} + \frac{1}{8} \sqrt{17+3\sqrt{17} - \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}}. \end{align}</span> Note that in the big square root the term <span class="math-container">$ \frac{1}{2}(3+\sqrt{17})\sqrt{34-2\sqrt{17}}$</span> can be seen, while Gauss actually gave the term <span class="math-container">$-\sqrt{34-2\sqrt{17}}-2\sqrt{34+2\sqrt{17}}$</span>. You can show by squaring both expressions that you get <span class="math-container">$170+38\sqrt{17}$</span>, thus both are roots of <span class="math-container">$170+38\sqrt{17}$</span>, and both are negative, therefore they represent the same number.</p> <p>You can try the same process with <span class="math-container">$\zeta_{257}$</span> (which will take 7 steps, this took 3 steps).</p>
Derivative of arcsin
2,109,428
7
https://math.stackexchange.com/questions/2109428/derivative-of-arcsin
<p>In my assignment I need to analyze the function</p> <p>$f(x)=\arcsin \frac{1-x^2}{1+x^2}$</p> <p>So I need to compute the first derivative, and my result is:</p> <p>$-\dfrac{4x}{\left(x^2+1\right)^2\sqrt{1-\frac{\left(1-x^2\right)^2}{\left(x^2+1\right)^2}}}$</p> <p>But the solution to this assignment says</p> <p>$f'(x)=-\frac{2x}{|x|(1+x^2)}$</p> <p>I don't understand how they get this. I checked my answer with an online calculator and it is the same.</p>
[ { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" }, { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" }, { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" }, { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment 
solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" }, { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" }, { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" }, { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment 
solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" }, { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" }, { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" }, { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment 
solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" }, { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" }, { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" }, { "answer_id": 2109441, "score": 9, "text": "<p>You should have developed your result a bit more to obtain the assignment 
solution.</p>\n\n<p>\\begin{align}\n-\\frac{4x}{(x^2+1)^2\\sqrt{1-\\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\sqrt{\\frac{4x^2}{(x^2+1)^2}}}\\\\\n&amp;=-\\frac{4x}{(x^2+1)^2\\frac{2|x|}{(x^2+1)}}\\\\\n&amp;=-\\frac{2x}{|x|(x^2+1) }\n\\end{align}\nVoila!</p>\n" }, { "answer_id": 2109439, "score": 7, "text": "<p><em>Divide et impera</em>.</p>\n\n<p>Under the square root you have\n$$\n1-\\frac{(1-x^2)^2}{(1+x^2)^2}=\n\\frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\\frac{4x^2}{(1+x^2)^2}\n$$\nso the big square root is actually\n$$\n\\frac{2|x|}{1+x^2}\n$$\nThus your formula becomes\n$$\n-\\frac{4x}{(1+x^2)^2}\\frac{1+x^2}{2|x|}=-\\frac{2x}{|x|(1+x^2)}\n$$</p>\n" }, { "answer_id": 2109450, "score": 2, "text": "<p>Use that $(1+x^2)^2\\sqrt{1-\\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{(1+x^2)^2-\\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\\sqrt{2\\cdot2x^2}$</p>\n" } ]
Question: Derivative of arcsin Answer 1: <p>You should have developed your result a bit more to obtain the assignment solution.</p> <p>\begin{align} -\frac{4x}{(x^2+1)^2\sqrt{1-\frac{(1-x^2)^2}{(x^2+1)^2}}}&amp;=-\frac{4x}{(x^2+1)^2\sqrt{\frac{x^4+1+2x^2-1-x^4+2x^2}{(x^2+1)^2}}} \\ &amp;=-\frac{4x}{(x^2+1)^2\sqrt{\frac{4x^2}{(x^2+1)^2}}}\\ &amp;=-\frac{4x}{(x^2+1)^2\frac{2|x|}{(x^2+1)}}\\ &amp;=-\frac{2x}{|x|(x^2+1)} \end{align} Voila!</p> Answer 2: <p><em>Divide et impera</em>.</p> <p>Under the square root you have $$ 1-\frac{(1-x^2)^2}{(1+x^2)^2}= \frac{(1+x^2)^2-(1-x^2)^2}{(1+x^2)^2}=\frac{4x^2}{(1+x^2)^2} $$ so the big square root is actually $$ \frac{2|x|}{1+x^2} $$ Thus your formula becomes $$ -\frac{4x}{(1+x^2)^2}\frac{1+x^2}{2|x|}=-\frac{2x}{|x|(1+x^2)} $$</p> Answer 3: <p>Use that $(1+x^2)^2\sqrt{1-\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\sqrt{(1+x^2)^2-\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\sqrt{2\cdot2x^2}$</p>
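The simplification above is easy to verify symbolically. The following sketch is not part of the original thread; the use of sympy and the variable names are my choice. It differentiates $\arcsin\frac{1-x^2}{1+x^2}$ and spot-checks the result against the simplified form $-\frac{2x}{|x|(1+x^2)}$:

```python
# Hypothetical verification sketch; sympy is assumed to be available.
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.asin((1 - x**2) / (1 + x**2))

derivative = sp.diff(f, x)                 # chain rule, unsimplified
claimed = -2*x / (sp.Abs(x) * (x**2 + 1))  # the simplified answer

# Numeric spot checks away from x = 0, where |x| is not differentiable.
for val in (-2, -0.5, 0.5, 2):
    assert abs((derivative - claimed).subs(x, val).evalf()) < 1e-12
```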
What is the largest number we can get using $n$ ones, addition, multiplication and brackets?
2,109,446
7
https://math.stackexchange.com/questions/2109446/what-is-the-largest-number-we-can-get-using-n-ones-addition-multiplication-a
<p>Let's say we have $n$ ones, i.e. $1,1,\dots,1$ $n$ times and are allowed to add them, multiply and insert brackets wherever we want.</p> <p>What is the largest number we can get for a particular $n$? Is there a closed form or at least an OEIS sequence?</p> <p>For $n=5$ it appears to be $(1+1)(1+1+1)=6$, for $n=6$ it appears to be $(1+1+1)(1+1+1)=9$, for $n=9$ I found $(1+1+1)(1+1+1)(1+1+1)=27$ to be the largest number.</p> <p>But I don't see a way to find a general formula. I guess it would make sense to start from the other end - for each number $N$ find a factorization with the least sum of factors or something like that.</p>
[ { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... \\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... \\cdot a_t = \\left(\\frac nt \\right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" }, { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... 
\\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... \\cdot a_t = \\left(\\frac nt \\right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" }, { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... \\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... 
\\cdot a_t = \\left(\\frac nt \\right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" }, { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... \\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... \\cdot a_t = \\left(\\frac nt \\right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" }, { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. 
Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... \\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... \\cdot a_t = \\left(\\frac nt \\right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" }, { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... \\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. 
Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... \\cdot a_t = \\left(\\frac nt \\right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" }, { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... \\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... \\cdot a_t = \\left(\\frac nt \\right)^t$. 
This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" }, { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... \\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... \\cdot a_t = \\left(\\frac nt \\right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" }, { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. 
Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... \\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... \\cdot a_t = \\left(\\frac nt \\right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" }, { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... \\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. 
Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... \\cdot a_t = \\left(\\frac nt \\right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" }, { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... \\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... \\cdot a_t = \\left(\\frac nt \\right)^t$. 
This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" }, { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... \\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... \\cdot a_t = \\left(\\frac nt \\right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" }, { "answer_id": 2109468, "score": 10, "text": "<p>Since </p>\n\n<p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. 
Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p>\n\n<p>and</p>\n\n<p>2) two 1+1+1 terms is better than three 1+1 terms,</p>\n\n<p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p>\n\n<p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p>\n\n<p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p>\n\n<p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p>\n" }, { "answer_id": 2109461, "score": 1, "text": "<p>Don't know if this could be an \"approximate\" solution or anything. But try the following.</p>\n\n<p>If the final expression consists of $x_1 \\times ... \\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \\ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \\approx 2.7$, which can be as good as $3$ in the case of integers</p>\n" }, { "answer_id": 2109476, "score": 0, "text": "<p>Obviously the problem reduces to finding the maximum of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \\cdot a_2 \\cdot ... \\cdot a_t$ will increase if $a_1 \\ge a_2+2$. Eventually this leads to concluding that for a \"fixed\" $t$ the maximum will occur when $a$'s are as close as possible. </p>\n\n<p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \\cdots = a_t = a$. Then we have $a = \\frac nt$. Now $a_1 \\cdot a_2 \\cdot ... \\cdot a_t = \\left(\\frac nt \\right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \\frac ne$</p>\n\n<p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \\left\\lfloor \\frac ne \\right\\rfloor$ or $\\left\\lfloor \\frac ne \\right\\rfloor + 1$. In each case we choose $a_i = \\left\\lfloor \\frac nt \\right\\rfloor$ or $\\left\\lfloor \\frac nt \\right\\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p>\n" } ]
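As a cross-check of the top answer's closed form, here is a small sketch of my own (not from the thread). It relies on the answers' observation that the optimum is attained by a product of bracketed sums, i.e. the maximum product over integer partitions of $n$:

```python
# Hypothetical check of the mod-3 closed form against a brute-force
# maximum product over partitions of n (the value reachable from n ones).
from functools import lru_cache

@lru_cache(maxsize=None)
def best(n: int) -> int:
    """Maximum product of positive integers summing to n (keeping n whole is allowed)."""
    return max([n] + [k * best(n - k) for k in range(1, n)])

def closed_form(n: int) -> int:
    if n % 3 == 0:
        return 3 ** (n // 3)
    if n % 3 == 1:
        return 4 * 3 ** ((n - 4) // 3)
    return 2 * 3 ** ((n - 2) // 3)

for n in range(2, 31):
    assert best(n) == closed_form(n), n
```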
Question: What is the largest number we can get using $n$ ones, addition, multiplication and brackets? Answer 1: <p>Since </p> <p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p> <p>and</p> <p>2) two 1+1+1 terms is better than three 1+1 terms,</p> <p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p> <p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p> <p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p> <p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p> Answer 2: <p>Don't know if this could be an "approximate" solution or anything. But try the following.</p> <p>If the final expression consists of $x_1 \times ... \times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \approx 2.7$, which can be as good as $3$ in the case of integers</p> Answer 3: <p>Obviously the problem reduces to finding the maximum of $a_1 \cdot a_2 \cdot ... \cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \cdot a_2 \cdot ... \cdot a_t$ will increase if $a_1 \ge a_2+2$. Eventually this leads to concluding that for a "fixed" $t$ the maximum will occur when $a$'s are as close as possible. </p> <p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \cdots = a_t = a$. Then we have $a = \frac nt$. Now $a_1 \cdot a_2 \cdot ... \cdot a_t = \left(\frac nt \right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \frac ne$</p> <p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \left\lfloor \frac ne \right\rfloor$ or $\left\lfloor \frac ne \right\rfloor + 1$. In each case we choose $a_i = \left\lfloor \frac nt \right\rfloor$ or $\left\lfloor \frac nt \right\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p> Answer 4: <p>Since </p> <p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p> <p>and</p> <p>2) two 1+1+1 terms is better than three 1+1 terms,</p> <p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p> <p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p> <p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p> <p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p> Answer 5: <p>Don't know if this could be an "approximate" solution or anything. But try the following.</p> <p>If the final expression consists of $x_1 \times ... \times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. 
Taking derivative wrt k and get $(n/k)^k ( \ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \approx 2.7$, which can be as good as $3$ in the case of integers</p> Answer 6: <p>Obviously the problem reduces to finding the maximum of $a_1 \cdot a_2 \cdot ... \cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \cdot a_2 \cdot ... \cdot a_t$ will increase if $a_1 \ge a_2+2$. Eventually this leads to concluding that for a "fixed" $t$ the maximum will occur when $a$'s are as close as possible. </p> <p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \cdots = a_t = a$. Then we have $a = \frac nt$. Now $a_1 \cdot a_2 \cdot ... \cdot a_t = \left(\frac nt \right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \frac ne$</p> <p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \left\lfloor \frac ne \right\rfloor$ or $\left\lfloor \frac ne \right\rfloor + 1$. In each case we choose $a_i = \left\lfloor \frac nt \right\rfloor$ or $\left\lfloor \frac nt \right\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p> Answer 7: <p>Since </p> <p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p> <p>and</p> <p>2) two 1+1+1 terms is better than three 1+1 terms,</p> <p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p> <p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p> <p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p> <p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p> Answer 8: <p>Don't know if this could be an "approximate" solution or anything. But try the following.</p> <p>If the final expression consists of $x_1 \times ... \times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \approx 2.7$, which can be as good as $3$ in the case of integers</p> Answer 9: <p>Obviously the problem reduces to finding the maximum of $a_1 \cdot a_2 \cdot ... \cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \cdot a_2 \cdot ... \cdot a_t$ will increase if $a_1 \ge a_2+2$. Eventually this leads to concluding that for a "fixed" $t$ the maximum will occur when $a$'s are as close as possible. </p> <p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \cdots = a_t = a$. Then we have $a = \frac nt$. Now $a_1 \cdot a_2 \cdot ... \cdot a_t = \left(\frac nt \right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \frac ne$</p> <p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \left\lfloor \frac ne \right\rfloor$ or $\left\lfloor \frac ne \right\rfloor + 1$. 
In each case we choose $a_i = \left\lfloor \frac nt \right\rfloor$ or $\left\lfloor \frac nt \right\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p> Answer 10: <p>Since </p> <p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p> <p>and</p> <p>2) two 1+1+1 terms is better than three 1+1 terms,</p> <p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p> <p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p> <p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p> <p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p> Answer 11: <p>Don't know if this could be an "approximate" solution or anything. But try the following.</p> <p>If the final expression consists of $x_1 \times ... \times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \approx 2.7$, which can be as good as $3$ in the case of integers</p> Answer 12: <p>Obviously the problem reduces to finding the maximum of $a_1 \cdot a_2 \cdot ... \cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \cdot a_2 \cdot ... \cdot a_t$ will increase if $a_1 \ge a_2+2$. Eventually this leads to concluding that for a "fixed" $t$ the maximum will occur when $a$'s are as close as possible. </p> <p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \cdots = a_t = a$. Then we have $a = \frac nt$. Now $a_1 \cdot a_2 \cdot ... \cdot a_t = \left(\frac nt \right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \frac ne$</p> <p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \left\lfloor \frac ne \right\rfloor$ or $\left\lfloor \frac ne \right\rfloor + 1$. In each case we choose $a_i = \left\lfloor \frac nt \right\rfloor$ or $\left\lfloor \frac nt \right\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p> Answer 13: <p>Since </p> <p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p> <p>and</p> <p>2) two 1+1+1 terms is better than three 1+1 terms,</p> <p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p> <p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p> <p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p> <p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p> Answer 14: <p>Don't know if this could be an "approximate" solution or anything. But try the following.</p> <p>If the final expression consists of $x_1 \times ... 
\times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \approx 2.7$, which can be as good as $3$ in the case of integers</p> Answer 15: <p>Obviously the problem reduces to finding the maximum of $a_1 \cdot a_2 \cdot ... \cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \cdot a_2 \cdot ... \cdot a_t$ will increase if $a_1 \ge a_2+2$. Eventually this leads to concluding that for a "fixed" $t$ the maximum will occur when $a$'s are as close as possible. </p> <p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \cdots = a_t = a$. Then we have $a = \frac nt$. Now $a_1 \cdot a_2 \cdot ... \cdot a_t = \left(\frac nt \right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \frac ne$</p> <p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \left\lfloor \frac ne \right\rfloor$ or $\left\lfloor \frac ne \right\rfloor + 1$. In each case we choose $a_i = \left\lfloor \frac nt \right\rfloor$ or $\left\lfloor \frac nt \right\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p> Answer 16: <p>Since </p> <p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p> <p>and</p> <p>2) two 1+1+1 terms is better than three 1+1 terms,</p> <p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p> <p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p> <p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p> <p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p> Answer 17: <p>Don't know if this could be an "approximate" solution or anything. But try the following.</p> <p>If the final expression consists of $x_1 \times ... \times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \approx 2.7$, which can be as good as $3$ in the case of integers</p> Answer 18: <p>Obviously the problem reduces to finding the maximum of $a_1 \cdot a_2 \cdot ... \cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \cdot a_2 \cdot ... \cdot a_t$ will increase if $a_1 \ge a_2+2$. Eventually this leads to concluding that for a "fixed" $t$ the maximum will occur when $a$'s are as close as possible. </p> <p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \cdots = a_t = a$. Then we have $a = \frac nt$. Now $a_1 \cdot a_2 \cdot ... \cdot a_t = \left(\frac nt \right)^t$. 
This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \frac ne$</p> <p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \left\lfloor \frac ne \right\rfloor$ or $\left\lfloor \frac ne \right\rfloor + 1$. In each case we choose $a_i = \left\lfloor \frac nt \right\rfloor$ or $\left\lfloor \frac nt \right\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p> Answer 19: <p>Since </p> <p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p> <p>and</p> <p>2) two 1+1+1 terms is better than three 1+1 terms,</p> <p>your basic strategy is to get as many 1+1+1 terms as possible. So:</p> <p>If n mod 3 = 0, the best you can do is $3^{n/3}$</p> <p>If n mod 3 = 1, get two 1+1 terms, and otherwise 1+1+1 terms, so you get $4*3^{(n-4)/3}$</p> <p>If n mod 3 = 2, get one 1+1 term in addition to 1+1+1 terms, so $2*3^{(n-2)/3}$</p> Answer 20: <p>Don't know if this could be an "approximate" solution or anything. But try the following.</p> <p>If the final expression consists of $x_1 \times ... \times x_k$ where $x_1+...+x_k=n$, with $k&lt;n$, Holding n,k fixed, and suppose the $x$'s need not be integers for the moment, the max is $(n/k)^k$. Taking derivative wrt k and get $(n/k)^k ( \ln(n/k)-1)=0)$ to obtain the best $k$ which is about $n/e$, with optimally each term being approx $x \approx 2.7$, which can be as good as $3$ in the case of integers</p> Answer 21: <p>Obviously the problem reduces to finding the maximum of $a_1 \cdot a_2 \cdot ... \cdot a_t$, when $a_1 + a_2 + ... + a_t = n$. Now let's increase $a_1$ by one and decrease $a_2$ by one. Then the value of $a_1 \cdot a_2 \cdot ... \cdot a_t$ will increase if $a_1 \ge a_2+2$. Eventually this leads to concluding that for a "fixed" $t$ the maximum will occur when $a$'s are as close as possible. </p> <p>Now let's find the optimal $t$. As previously concluded assume that $a_1 = \cdots = a_t = a$. Then we have $a = \frac nt$. Now $a_1 \cdot a_2 \cdot ... \cdot a_t = \left(\frac nt \right)^t$. This function obtains a maximum at $t=e$, so therefore the maximum is obtained for $t = \frac ne$</p> <p>As all the variables are integers and as the functions are continuous we have that the we need to choose $t = \left\lfloor \frac ne \right\rfloor$ or $\left\lfloor \frac ne \right\rfloor + 1$. In each case we choose $a_i = \left\lfloor \frac nt \right\rfloor$ or $\left\lfloor \frac nt \right\rfloor + 1$ accordingly, s.t. the sum is $n$. This reduces the problem to few cases, which can be easily checked. </p> Answer 22: <p>Since </p> <p>1) it doesn't make sense to create any 1+1+1+... terms with more than 3 1's, since 1+1+1+1+1 is 'worse' than (1+1) times (1+1+1), and for more 1's it will be even 'better' to multiply several 1+1+.. Terms rather than have one long one, and with 4 1s 1+1+1+1 is just as good as (1+1) times (1+1),</p> <p>and</p> <p>2) two 1+1+1 terms is better than three 1+1 terms,</p> <p>your basic strategy is to get as many 1+1+1 terms as possible. 
Closed form for the limit of the iterated sequence $a_{n+1}=\frac{\sqrt{(a_n+b_n)(a_n+c_n)}}{2}$
2,110,850
7
https://math.stackexchange.com/questions/2110850/closed-form-for-the-limit-of-the-iterated-sequence-a-n1-frac-sqrta-nb-n
<p>Is there a general closed form or an integral representation for the limit of the sequence: $$a_{n+1}=\frac{\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\ b_{n+1}=\frac{\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\ c_{n+1}=\frac{\sqrt{(c_n+a_n)(c_n+b_n)}}{2}$$ in terms of $a_0,b_0,c_0$?</p> <p>$$L(a_0,b_0,c_0)=\lim_{n \to \infty}a_n=\lim_{n \to \infty}b_n=\lim_{n \to \infty}c_n$$</p> <p>For the simplest case $a_0=b_0$ we have some interesting closed forms in terms of inverse hyperbolic or trigonometric functions:</p> <p>$$L(1,1,\sqrt{2})=\frac{1}{\ln(1+\sqrt{2})}$$</p> <p>$$L(1,1,1/\sqrt{2})=\frac{2 \sqrt{2}}{\pi}$$</p> <p>$$L(1,1,2)=\frac{\sqrt{3}}{\ln(2+\sqrt{3})}$$</p> <p>$$L(1,1,1/2)=\frac{3 \sqrt{3}}{2 \pi }$$</p> <p>$$L(1,1,3)=\frac{\sqrt{2}}{\ln(1+\sqrt{2})}$$</p> <p>$$L(1,1,1/3)=\frac{2 \sqrt{2}}{3 \arcsin \left( \frac{2 \sqrt{2}}{3} \right)}$$</p> <p>$$L(1,1,\sin 1)=\frac{2 \cos 1}{\pi -2}$$</p> <p>$$L(1,1,\sin 1/2)=\frac{2 \cos 1/2}{\pi -1}$$</p> <p>$$L(1,1,\cosh 1)=\sinh 1$$</p> <p>$$L(1,1,\cosh 2)=\frac{\sinh 2}{2}$$</p> <p>These values were obtained with Wolfram Alpha and the Inverse Symbolic Calculator and checked with Mathematica.</p> <p>This result seems familiar to me, but I'm pretty sure I don't know how to obtain a general closed form or even an integral representation.</p> <hr> <p><a href="https://math.stackexchange.com/q/1902993/269624">This question</a> is closely related, but the sequence is very different.</p> <hr> <p>Update:</p> <p>It seems likely that the closed form for the special case $a_0=b_0=1$ is:</p> <blockquote> <p>$$L(1,1,x)=\frac{\sinh \left(\cosh ^{-1}\left(x\right)\right)}{\cosh ^{-1}\left(x\right)}$$</p> </blockquote> <p>However, a proof would be nice, as would the more general case.</p> <hr> <p>Thanks to Sangchul Lee I now see that this sequence is exactly Carlson's transformation. Changing variables:</p> <p>$$a^2=A,\quad b^2=B,\quad c^2=C$$</p> <p>$$A_{n+1}=\frac{1}{4} (A_n+\sqrt{A_nB_n}+\sqrt{B_nC_n}+\sqrt{C_nA_n})$$</p> <p>$$B_{n+1}=\frac{1}{4} (B_n+\sqrt{A_nB_n}+\sqrt{B_nC_n}+\sqrt{C_nA_n})$$</p> <p>$$C_{n+1}=\frac{1}{4} (C_n+\sqrt{A_nB_n}+\sqrt{B_nC_n}+\sqrt{C_nA_n})$$</p> <p>According to <a href="https://en.wikipedia.org/wiki/Carlson_symmetric_form" rel="nofollow noreferrer">Wikipedia</a> and the references therein, we have:</p> <p>$$R_F(A_{n+1},B_{n+1},C_{n+1})=R_F(A_n,B_n,C_n)$$</p> <p>$$R_F(A,B,C)=\frac{1}{2} \int_0^\infty \frac{dt}{\sqrt{(t+A)(t+B)(t+C)}}$$</p> <p>This is exactly the 'closed form' I wanted.</p>
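<p>To make the invariance above concrete, here is a short numerical sketch (assuming Python with <a href="https://mpmath.org" rel="nofollow noreferrer">mpmath</a>; the helper names <code>rf</code> and <code>limit</code> are ours). Since $R_F(L^2,L^2,L^2)=\frac12\int_0^\infty (t+L^2)^{-3/2}\,dt=\frac1L$, passing the invariance to the limit suggests $L(a_0,b_0,c_0)=1/R_F(a_0^2,b_0^2,c_0^2)$; the sketch checks this against the iteration and against the conjectured $L(1,1,x)$ formula for $x=2$:</p> <pre><code>
from mpmath import mp, mpf, sqrt, quad, inf, acosh, sinh

mp.dps = 30

def rf(A, B, C):
    # Carlson symmetric form R_F(A,B,C), evaluated directly from its
    # defining integral by numerical quadrature.
    return quad(lambda t: 1 / sqrt((t + A) * (t + B) * (t + C)), [0, inf]) / 2

def limit(a, b, c, iters=80):
    # Iterate the recurrence until a, b, c agree to working precision.
    for _ in range(iters):
        a, b, c = (sqrt((a + b) * (a + c)) / 2,
                   sqrt((a + b) * (b + c)) / 2,
                   sqrt((a + c) * (b + c)) / 2)
    return a

a0, b0, c0 = mpf(1), mpf(1), mpf(2)
print(limit(a0, b0, c0))               # iterated limit L(1,1,2)
print(1 / rf(a0**2, b0**2, c0**2))     # 1/R_F(A_0,B_0,C_0): same number
print(sinh(acosh(c0)) / acosh(c0))     # conjectured closed form for L(1,1,x)
</code></pre> <p>For $(1,1,2)$ all three printed values agree with $\sqrt{3}/\ln(2+\sqrt{3}) \approx 1.3152$, matching the table above.</p>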
[ { "answer_id": 2111108, "score": 2, "text": "<p>Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \\geq 0$. Then plugging this to OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation</p>\n\n<p>$$ a_{n+1} = \\frac{a_n + b_n}{2}, \\qquad b_{n+1} = \\sqrt{a_{n+1}b_n}. $$</p>\n\n<p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p>\n\n<p>$$ \\lim_{n\\to\\infty} a_n = \\lim_{n\\to\\infty} b_n = SB(a_0, b_0)\n= \\begin{cases}\n\\dfrac{\\sqrt{a^2 - b^2}}{\\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\\\\n\\dfrac{\\sqrt{b^2 - a^2}}{\\operatorname{arccos}(a/b)}, &amp; a &lt; b \\\\\na, &amp; a = b\n\\end{cases} \\tag{*} $$</p>\n\n<hr>\n\n<p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href=\"http://www.jstor.org/stable/2317754\" rel=\"nofollow noreferrer\">Carlson's proof of $\\text{(*)}$</a> so that future me do not repeat this mistake.</p>\n\n<p>Define $I(a,b,c)$ by</p>\n\n<p>$$ I(a,b,c) := \\lim_{R\\to\\infty} \\int_{-R}^{R} \\frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p>\n\n<p>where $z^{1/3} = \\exp(\\frac{1}{3}\\log z)$ with the branch cut $-\\pi &lt; \\arg(z) \\leq \\pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \\mapsto 4x + ab + bc + ca$, we find that</p>\n\n<p>$$ I(a, b, c) = I\\left( \\frac{\\sqrt{(a+b)(a+c)}}{2}, \\frac{\\sqrt{(b+c)(b+a)}}{2}, \\frac{\\sqrt{(c+a)(c+b)}}{2} \\right). $$</p>\n\n<p>This tells us that $I(a_n,b_n,c_n) = \\cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href=\"https://en.wikipedia.org/wiki/Landen&#39;s_transformation\" rel=\"nofollow noreferrer\"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p>\n\n<p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! 
This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p>\n" }, { "answer_id": 2111068, "score": 0, "text": "<p>I can show that\nthe sum of the squares of the terms\ndecreases at each step.</p>\n\n<p>I think this implies convergence,\nbut I am not completely sure.</p>\n\n<p>$a_{n+1}\n=\\frac{\\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\\\ \nb_{n+1}=\n\\frac{\\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\\\ \nc_{n+1}\n=\\frac{\\sqrt{(c_n+a_n)(c_n+b_n)}}{2}\n$</p>\n\n<p>I'm going to play around\nand see if anything interesting happens,\nwith a goal of proving convergence,\nmaybe.</p>\n\n<p>Multiplying, we get</p>\n\n<p>$a_{n+1}b_{n+1}c_{n+1}\n=(a_n+b_n)(a_n+c_n)(b_n+c_n)/8\n=(a^2 b + a^2 c + a b^2 + 2 a b c + a c^2 + b^2 c + b c^2)/8\n$\n(omitting the \"_n\")\nso that</p>\n\n<p>$\\begin{array}\\\\\n\\dfrac{a_{n+1}b_{n+1}c_{n+1}}{abc}\n&amp;=(\\frac{a}{c} + \\frac{a}{b} + \\frac{b}{c} + 2 + \\frac{c}{b} + \\frac{b}{a} + \\frac{c}{a})/8\\\\\n&amp;=(\\frac{a}{c} + \\frac{c}{a} + \\frac{a}{b} + \\frac{b}{a} + \\frac{b}{c}+ \\frac{c}{b} + 2 )/8\\\\\n&amp;=(\\frac{a}{c} -2+ \\frac{c}{a} + \\frac{a}{b}-2 + \\frac{b}{a} + \\frac{b}{c}-2+ \\frac{c}{b} + 8 )/8\\\\\n&amp;=(g(\\dfrac{a}{c}) + g(\\dfrac{a}{b}) + g(\\dfrac{b}{c}))/8+1\n\\qquad\\text{where } g(x) = (\\sqrt{x}-\\dfrac1{\\sqrt{x}})^2=x+\\dfrac1{x}-2\\\\\n\\end{array}\n$</p>\n\n<p>Note that \n$g'(x)\n=1-\\dfrac1{x^2}\n$.</p>\n\n<p>Don't know what to do with this,\nso I'll try something else.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}-a\n&amp;=\\frac{\\sqrt{(a+b)(a+c)}}{2}-a\\\\\n&amp;=a(\\frac{\\sqrt{(1+b/a)(1+c/a)}}{2}-1)\\\\\n\\end{array}\n$</p>\n\n<p>Again nothing.</p>\n\n<p>Let's try this.</p>\n\n<p>Since\n$(a+b+c)^2\n\\le 3(a^2+b^2+c^2)\n$\nwith equality iff\n$a=b=c$,</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n&amp;=(\\frac32(a^2+b^2+c^2+2ab+2ac+2bc)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=(\\frac32(a+b+c)^2-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;\\le(\\frac32 3(a^2+b^2+c^2)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=a^2+b^2+c^2\\\\\n\\end{array}\n$</p>\n\n<p>Therefore,\nif at least two of\n$a, b, c$ are distinct,\nthe sum of their squares\ndecreases.</p>\n\n<p>This is not a proof of convergence,\nbut a start.</p>\n\n<p>Let's try a variation on \nthis last result.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n\\end{array}\n$</p>\n\n<p>so</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n-(a^2+b^2+c^2)\n&amp;=(-3a^2-3b^2-3c^2+3ab+3ac+3bc)/4\\\\\n&amp;=-3(a^2+b^2+c^2-ab-ac-bc)/4\\\\\n&amp;=-3(2a^2+2b^2+2c^2-2ab-2ac-2bc)/8\\\\\n&amp;=-3((a-b)^2+(a-c)^2+(b-c)^2)/8\\\\\n\\end{array}\n$</p>\n\n<p>This shows precisely\nhow the sum of squares\ndecreases at each step.</p>\n\n<p>I think that\nthis implies convergence.</p>\n" } ]
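<p>The exact decrement derived in the second answer is easy to confirm symbolically. Below is a minimal sketch (Python with sympy; the variable names are ours), checking that one step of the recurrence lowers $a^2+b^2+c^2$ by exactly $\frac38\left((a-b)^2+(a-c)^2+(b-c)^2\right)$:</p> <pre><code>
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)

# Squares after one step: a_{n+1}^2 = (a+b)(a+c)/4, and cyclically.
new_sum = ((a + b)*(a + c) + (a + b)*(b + c) + (a + c)*(b + c)) / 4
old_sum = a**2 + b**2 + c**2
drop = sp.Rational(3, 8) * ((a - b)**2 + (a - c)**2 + (b - c)**2)

print(sp.expand(old_sum - new_sum - drop))   # 0, confirming the identity
</code></pre> <p>Because the sum of squares is bounded below and each step subtracts a fixed multiple of the squared pairwise gaps, those gaps must tend to $0$, supporting the convergence claim made in that answer.</p>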
$$</p>\n\n<p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p>\n\n<p>$$ \\lim_{n\\to\\infty} a_n = \\lim_{n\\to\\infty} b_n = SB(a_0, b_0)\n= \\begin{cases}\n\\dfrac{\\sqrt{a^2 - b^2}}{\\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\\\\n\\dfrac{\\sqrt{b^2 - a^2}}{\\operatorname{arccos}(a/b)}, &amp; a &lt; b \\\\\na, &amp; a = b\n\\end{cases} \\tag{*} $$</p>\n\n<hr>\n\n<p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href=\"http://www.jstor.org/stable/2317754\" rel=\"nofollow noreferrer\">Carlson's proof of $\\text{(*)}$</a> so that future me do not repeat this mistake.</p>\n\n<p>Define $I(a,b,c)$ by</p>\n\n<p>$$ I(a,b,c) := \\lim_{R\\to\\infty} \\int_{-R}^{R} \\frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p>\n\n<p>where $z^{1/3} = \\exp(\\frac{1}{3}\\log z)$ with the branch cut $-\\pi &lt; \\arg(z) \\leq \\pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \\mapsto 4x + ab + bc + ca$, we find that</p>\n\n<p>$$ I(a, b, c) = I\\left( \\frac{\\sqrt{(a+b)(a+c)}}{2}, \\frac{\\sqrt{(b+c)(b+a)}}{2}, \\frac{\\sqrt{(c+a)(c+b)}}{2} \\right). $$</p>\n\n<p>This tells us that $I(a_n,b_n,c_n) = \\cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href=\"https://en.wikipedia.org/wiki/Landen&#39;s_transformation\" rel=\"nofollow noreferrer\"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p>\n\n<p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p>\n" }, { "answer_id": 2111068, "score": 0, "text": "<p>I can show that\nthe sum of the squares of the terms\ndecreases at each step.</p>\n\n<p>I think this implies convergence,\nbut I am not completely sure.</p>\n\n<p>$a_{n+1}\n=\\frac{\\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\\\ \nb_{n+1}=\n\\frac{\\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\\\ \nc_{n+1}\n=\\frac{\\sqrt{(c_n+a_n)(c_n+b_n)}}{2}\n$</p>\n\n<p>I'm going to play around\nand see if anything interesting happens,\nwith a goal of proving convergence,\nmaybe.</p>\n\n<p>Multiplying, we get</p>\n\n<p>$a_{n+1}b_{n+1}c_{n+1}\n=(a_n+b_n)(a_n+c_n)(b_n+c_n)/8\n=(a^2 b + a^2 c + a b^2 + 2 a b c + a c^2 + b^2 c + b c^2)/8\n$\n(omitting the \"_n\")\nso that</p>\n\n<p>$\\begin{array}\\\\\n\\dfrac{a_{n+1}b_{n+1}c_{n+1}}{abc}\n&amp;=(\\frac{a}{c} + \\frac{a}{b} + \\frac{b}{c} + 2 + \\frac{c}{b} + \\frac{b}{a} + \\frac{c}{a})/8\\\\\n&amp;=(\\frac{a}{c} + \\frac{c}{a} + \\frac{a}{b} + \\frac{b}{a} + \\frac{b}{c}+ \\frac{c}{b} + 2 )/8\\\\\n&amp;=(\\frac{a}{c} -2+ \\frac{c}{a} + \\frac{a}{b}-2 + \\frac{b}{a} + \\frac{b}{c}-2+ \\frac{c}{b} + 8 )/8\\\\\n&amp;=(g(\\dfrac{a}{c}) + g(\\dfrac{a}{b}) + g(\\dfrac{b}{c}))/8+1\n\\qquad\\text{where } g(x) = (\\sqrt{x}-\\dfrac1{\\sqrt{x}})^2=x+\\dfrac1{x}-2\\\\\n\\end{array}\n$</p>\n\n<p>Note that \n$g'(x)\n=1-\\dfrac1{x^2}\n$.</p>\n\n<p>Don't know what to do with this,\nso I'll try something else.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}-a\n&amp;=\\frac{\\sqrt{(a+b)(a+c)}}{2}-a\\\\\n&amp;=a(\\frac{\\sqrt{(1+b/a)(1+c/a)}}{2}-1)\\\\\n\\end{array}\n$</p>\n\n<p>Again nothing.</p>\n\n<p>Let's try this.</p>\n\n<p>Since\n$(a+b+c)^2\n\\le 3(a^2+b^2+c^2)\n$\nwith equality 
iff\n$a=b=c$,</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n&amp;=(\\frac32(a^2+b^2+c^2+2ab+2ac+2bc)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=(\\frac32(a+b+c)^2-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;\\le(\\frac32 3(a^2+b^2+c^2)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=a^2+b^2+c^2\\\\\n\\end{array}\n$</p>\n\n<p>Therefore,\nif at least two of\n$a, b, c$ are distinct,\nthe sum of their squares\ndecreases.</p>\n\n<p>This is not a proof of convergence,\nbut a start.</p>\n\n<p>Let's try a variation on \nthis last result.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n\\end{array}\n$</p>\n\n<p>so</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n-(a^2+b^2+c^2)\n&amp;=(-3a^2-3b^2-3c^2+3ab+3ac+3bc)/4\\\\\n&amp;=-3(a^2+b^2+c^2-ab-ac-bc)/4\\\\\n&amp;=-3(2a^2+2b^2+2c^2-2ab-2ac-2bc)/8\\\\\n&amp;=-3((a-b)^2+(a-c)^2+(b-c)^2)/8\\\\\n\\end{array}\n$</p>\n\n<p>This shows precisely\nhow the sum of squares\ndecreases at each step.</p>\n\n<p>I think that\nthis implies convergence.</p>\n" }, { "answer_id": 2111108, "score": 2, "text": "<p>Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \\geq 0$. Then plugging this to OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation</p>\n\n<p>$$ a_{n+1} = \\frac{a_n + b_n}{2}, \\qquad b_{n+1} = \\sqrt{a_{n+1}b_n}. $$</p>\n\n<p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p>\n\n<p>$$ \\lim_{n\\to\\infty} a_n = \\lim_{n\\to\\infty} b_n = SB(a_0, b_0)\n= \\begin{cases}\n\\dfrac{\\sqrt{a^2 - b^2}}{\\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\\\\n\\dfrac{\\sqrt{b^2 - a^2}}{\\operatorname{arccos}(a/b)}, &amp; a &lt; b \\\\\na, &amp; a = b\n\\end{cases} \\tag{*} $$</p>\n\n<hr>\n\n<p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href=\"http://www.jstor.org/stable/2317754\" rel=\"nofollow noreferrer\">Carlson's proof of $\\text{(*)}$</a> so that future me do not repeat this mistake.</p>\n\n<p>Define $I(a,b,c)$ by</p>\n\n<p>$$ I(a,b,c) := \\lim_{R\\to\\infty} \\int_{-R}^{R} \\frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p>\n\n<p>where $z^{1/3} = \\exp(\\frac{1}{3}\\log z)$ with the branch cut $-\\pi &lt; \\arg(z) \\leq \\pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \\mapsto 4x + ab + bc + ca$, we find that</p>\n\n<p>$$ I(a, b, c) = I\\left( \\frac{\\sqrt{(a+b)(a+c)}}{2}, \\frac{\\sqrt{(b+c)(b+a)}}{2}, \\frac{\\sqrt{(c+a)(c+b)}}{2} \\right). $$</p>\n\n<p>This tells us that $I(a_n,b_n,c_n) = \\cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href=\"https://en.wikipedia.org/wiki/Landen&#39;s_transformation\" rel=\"nofollow noreferrer\"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p>\n\n<p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! 
This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p>\n" }, { "answer_id": 2111068, "score": 0, "text": "<p>I can show that\nthe sum of the squares of the terms\ndecreases at each step.</p>\n\n<p>I think this implies convergence,\nbut I am not completely sure.</p>\n\n<p>$a_{n+1}\n=\\frac{\\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\\\ \nb_{n+1}=\n\\frac{\\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\\\ \nc_{n+1}\n=\\frac{\\sqrt{(c_n+a_n)(c_n+b_n)}}{2}\n$</p>\n\n<p>I'm going to play around\nand see if anything interesting happens,\nwith a goal of proving convergence,\nmaybe.</p>\n\n<p>Multiplying, we get</p>\n\n<p>$a_{n+1}b_{n+1}c_{n+1}\n=(a_n+b_n)(a_n+c_n)(b_n+c_n)/8\n=(a^2 b + a^2 c + a b^2 + 2 a b c + a c^2 + b^2 c + b c^2)/8\n$\n(omitting the \"_n\")\nso that</p>\n\n<p>$\\begin{array}\\\\\n\\dfrac{a_{n+1}b_{n+1}c_{n+1}}{abc}\n&amp;=(\\frac{a}{c} + \\frac{a}{b} + \\frac{b}{c} + 2 + \\frac{c}{b} + \\frac{b}{a} + \\frac{c}{a})/8\\\\\n&amp;=(\\frac{a}{c} + \\frac{c}{a} + \\frac{a}{b} + \\frac{b}{a} + \\frac{b}{c}+ \\frac{c}{b} + 2 )/8\\\\\n&amp;=(\\frac{a}{c} -2+ \\frac{c}{a} + \\frac{a}{b}-2 + \\frac{b}{a} + \\frac{b}{c}-2+ \\frac{c}{b} + 8 )/8\\\\\n&amp;=(g(\\dfrac{a}{c}) + g(\\dfrac{a}{b}) + g(\\dfrac{b}{c}))/8+1\n\\qquad\\text{where } g(x) = (\\sqrt{x}-\\dfrac1{\\sqrt{x}})^2=x+\\dfrac1{x}-2\\\\\n\\end{array}\n$</p>\n\n<p>Note that \n$g'(x)\n=1-\\dfrac1{x^2}\n$.</p>\n\n<p>Don't know what to do with this,\nso I'll try something else.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}-a\n&amp;=\\frac{\\sqrt{(a+b)(a+c)}}{2}-a\\\\\n&amp;=a(\\frac{\\sqrt{(1+b/a)(1+c/a)}}{2}-1)\\\\\n\\end{array}\n$</p>\n\n<p>Again nothing.</p>\n\n<p>Let's try this.</p>\n\n<p>Since\n$(a+b+c)^2\n\\le 3(a^2+b^2+c^2)\n$\nwith equality iff\n$a=b=c$,</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n&amp;=(\\frac32(a^2+b^2+c^2+2ab+2ac+2bc)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=(\\frac32(a+b+c)^2-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;\\le(\\frac32 3(a^2+b^2+c^2)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=a^2+b^2+c^2\\\\\n\\end{array}\n$</p>\n\n<p>Therefore,\nif at least two of\n$a, b, c$ are distinct,\nthe sum of their squares\ndecreases.</p>\n\n<p>This is not a proof of convergence,\nbut a start.</p>\n\n<p>Let's try a variation on \nthis last result.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n\\end{array}\n$</p>\n\n<p>so</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n-(a^2+b^2+c^2)\n&amp;=(-3a^2-3b^2-3c^2+3ab+3ac+3bc)/4\\\\\n&amp;=-3(a^2+b^2+c^2-ab-ac-bc)/4\\\\\n&amp;=-3(2a^2+2b^2+2c^2-2ab-2ac-2bc)/8\\\\\n&amp;=-3((a-b)^2+(a-c)^2+(b-c)^2)/8\\\\\n\\end{array}\n$</p>\n\n<p>This shows precisely\nhow the sum of squares\ndecreases at each step.</p>\n\n<p>I think that\nthis implies convergence.</p>\n" }, { "answer_id": 2111108, "score": 2, "text": "<p>Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \\geq 0$. Then plugging this to OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation</p>\n\n<p>$$ a_{n+1} = \\frac{a_n + b_n}{2}, \\qquad b_{n+1} = \\sqrt{a_{n+1}b_n}. 
$$</p>\n\n<p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p>\n\n<p>$$ \\lim_{n\\to\\infty} a_n = \\lim_{n\\to\\infty} b_n = SB(a_0, b_0)\n= \\begin{cases}\n\\dfrac{\\sqrt{a^2 - b^2}}{\\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\\\\n\\dfrac{\\sqrt{b^2 - a^2}}{\\operatorname{arccos}(a/b)}, &amp; a &lt; b \\\\\na, &amp; a = b\n\\end{cases} \\tag{*} $$</p>\n\n<hr>\n\n<p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href=\"http://www.jstor.org/stable/2317754\" rel=\"nofollow noreferrer\">Carlson's proof of $\\text{(*)}$</a> so that future me do not repeat this mistake.</p>\n\n<p>Define $I(a,b,c)$ by</p>\n\n<p>$$ I(a,b,c) := \\lim_{R\\to\\infty} \\int_{-R}^{R} \\frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p>\n\n<p>where $z^{1/3} = \\exp(\\frac{1}{3}\\log z)$ with the branch cut $-\\pi &lt; \\arg(z) \\leq \\pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \\mapsto 4x + ab + bc + ca$, we find that</p>\n\n<p>$$ I(a, b, c) = I\\left( \\frac{\\sqrt{(a+b)(a+c)}}{2}, \\frac{\\sqrt{(b+c)(b+a)}}{2}, \\frac{\\sqrt{(c+a)(c+b)}}{2} \\right). $$</p>\n\n<p>This tells us that $I(a_n,b_n,c_n) = \\cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href=\"https://en.wikipedia.org/wiki/Landen&#39;s_transformation\" rel=\"nofollow noreferrer\"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p>\n\n<p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p>\n" }, { "answer_id": 2111068, "score": 0, "text": "<p>I can show that\nthe sum of the squares of the terms\ndecreases at each step.</p>\n\n<p>I think this implies convergence,\nbut I am not completely sure.</p>\n\n<p>$a_{n+1}\n=\\frac{\\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\\\ \nb_{n+1}=\n\\frac{\\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\\\ \nc_{n+1}\n=\\frac{\\sqrt{(c_n+a_n)(c_n+b_n)}}{2}\n$</p>\n\n<p>I'm going to play around\nand see if anything interesting happens,\nwith a goal of proving convergence,\nmaybe.</p>\n\n<p>Multiplying, we get</p>\n\n<p>$a_{n+1}b_{n+1}c_{n+1}\n=(a_n+b_n)(a_n+c_n)(b_n+c_n)/8\n=(a^2 b + a^2 c + a b^2 + 2 a b c + a c^2 + b^2 c + b c^2)/8\n$\n(omitting the \"_n\")\nso that</p>\n\n<p>$\\begin{array}\\\\\n\\dfrac{a_{n+1}b_{n+1}c_{n+1}}{abc}\n&amp;=(\\frac{a}{c} + \\frac{a}{b} + \\frac{b}{c} + 2 + \\frac{c}{b} + \\frac{b}{a} + \\frac{c}{a})/8\\\\\n&amp;=(\\frac{a}{c} + \\frac{c}{a} + \\frac{a}{b} + \\frac{b}{a} + \\frac{b}{c}+ \\frac{c}{b} + 2 )/8\\\\\n&amp;=(\\frac{a}{c} -2+ \\frac{c}{a} + \\frac{a}{b}-2 + \\frac{b}{a} + \\frac{b}{c}-2+ \\frac{c}{b} + 8 )/8\\\\\n&amp;=(g(\\dfrac{a}{c}) + g(\\dfrac{a}{b}) + g(\\dfrac{b}{c}))/8+1\n\\qquad\\text{where } g(x) = (\\sqrt{x}-\\dfrac1{\\sqrt{x}})^2=x+\\dfrac1{x}-2\\\\\n\\end{array}\n$</p>\n\n<p>Note that \n$g'(x)\n=1-\\dfrac1{x^2}\n$.</p>\n\n<p>Don't know what to do with this,\nso I'll try something else.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}-a\n&amp;=\\frac{\\sqrt{(a+b)(a+c)}}{2}-a\\\\\n&amp;=a(\\frac{\\sqrt{(1+b/a)(1+c/a)}}{2}-1)\\\\\n\\end{array}\n$</p>\n\n<p>Again nothing.</p>\n\n<p>Let's try this.</p>\n\n<p>Since\n$(a+b+c)^2\n\\le 3(a^2+b^2+c^2)\n$\nwith equality 
iff\n$a=b=c$,</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n&amp;=(\\frac32(a^2+b^2+c^2+2ab+2ac+2bc)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=(\\frac32(a+b+c)^2-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;\\le(\\frac32 3(a^2+b^2+c^2)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=a^2+b^2+c^2\\\\\n\\end{array}\n$</p>\n\n<p>Therefore,\nif at least two of\n$a, b, c$ are distinct,\nthe sum of their squares\ndecreases.</p>\n\n<p>This is not a proof of convergence,\nbut a start.</p>\n\n<p>Let's try a variation on \nthis last result.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n\\end{array}\n$</p>\n\n<p>so</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n-(a^2+b^2+c^2)\n&amp;=(-3a^2-3b^2-3c^2+3ab+3ac+3bc)/4\\\\\n&amp;=-3(a^2+b^2+c^2-ab-ac-bc)/4\\\\\n&amp;=-3(2a^2+2b^2+2c^2-2ab-2ac-2bc)/8\\\\\n&amp;=-3((a-b)^2+(a-c)^2+(b-c)^2)/8\\\\\n\\end{array}\n$</p>\n\n<p>This shows precisely\nhow the sum of squares\ndecreases at each step.</p>\n\n<p>I think that\nthis implies convergence.</p>\n" }, { "answer_id": 2111108, "score": 2, "text": "<p>Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \\geq 0$. Then plugging this to OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation</p>\n\n<p>$$ a_{n+1} = \\frac{a_n + b_n}{2}, \\qquad b_{n+1} = \\sqrt{a_{n+1}b_n}. $$</p>\n\n<p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p>\n\n<p>$$ \\lim_{n\\to\\infty} a_n = \\lim_{n\\to\\infty} b_n = SB(a_0, b_0)\n= \\begin{cases}\n\\dfrac{\\sqrt{a^2 - b^2}}{\\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\\\\n\\dfrac{\\sqrt{b^2 - a^2}}{\\operatorname{arccos}(a/b)}, &amp; a &lt; b \\\\\na, &amp; a = b\n\\end{cases} \\tag{*} $$</p>\n\n<hr>\n\n<p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href=\"http://www.jstor.org/stable/2317754\" rel=\"nofollow noreferrer\">Carlson's proof of $\\text{(*)}$</a> so that future me do not repeat this mistake.</p>\n\n<p>Define $I(a,b,c)$ by</p>\n\n<p>$$ I(a,b,c) := \\lim_{R\\to\\infty} \\int_{-R}^{R} \\frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p>\n\n<p>where $z^{1/3} = \\exp(\\frac{1}{3}\\log z)$ with the branch cut $-\\pi &lt; \\arg(z) \\leq \\pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \\mapsto 4x + ab + bc + ca$, we find that</p>\n\n<p>$$ I(a, b, c) = I\\left( \\frac{\\sqrt{(a+b)(a+c)}}{2}, \\frac{\\sqrt{(b+c)(b+a)}}{2}, \\frac{\\sqrt{(c+a)(c+b)}}{2} \\right). $$</p>\n\n<p>This tells us that $I(a_n,b_n,c_n) = \\cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href=\"https://en.wikipedia.org/wiki/Landen&#39;s_transformation\" rel=\"nofollow noreferrer\"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p>\n\n<p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! 
This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p>\n" }, { "answer_id": 2111068, "score": 0, "text": "<p>I can show that\nthe sum of the squares of the terms\ndecreases at each step.</p>\n\n<p>I think this implies convergence,\nbut I am not completely sure.</p>\n\n<p>$a_{n+1}\n=\\frac{\\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\\\ \nb_{n+1}=\n\\frac{\\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\\\ \nc_{n+1}\n=\\frac{\\sqrt{(c_n+a_n)(c_n+b_n)}}{2}\n$</p>\n\n<p>I'm going to play around\nand see if anything interesting happens,\nwith a goal of proving convergence,\nmaybe.</p>\n\n<p>Multiplying, we get</p>\n\n<p>$a_{n+1}b_{n+1}c_{n+1}\n=(a_n+b_n)(a_n+c_n)(b_n+c_n)/8\n=(a^2 b + a^2 c + a b^2 + 2 a b c + a c^2 + b^2 c + b c^2)/8\n$\n(omitting the \"_n\")\nso that</p>\n\n<p>$\\begin{array}\\\\\n\\dfrac{a_{n+1}b_{n+1}c_{n+1}}{abc}\n&amp;=(\\frac{a}{c} + \\frac{a}{b} + \\frac{b}{c} + 2 + \\frac{c}{b} + \\frac{b}{a} + \\frac{c}{a})/8\\\\\n&amp;=(\\frac{a}{c} + \\frac{c}{a} + \\frac{a}{b} + \\frac{b}{a} + \\frac{b}{c}+ \\frac{c}{b} + 2 )/8\\\\\n&amp;=(\\frac{a}{c} -2+ \\frac{c}{a} + \\frac{a}{b}-2 + \\frac{b}{a} + \\frac{b}{c}-2+ \\frac{c}{b} + 8 )/8\\\\\n&amp;=(g(\\dfrac{a}{c}) + g(\\dfrac{a}{b}) + g(\\dfrac{b}{c}))/8+1\n\\qquad\\text{where } g(x) = (\\sqrt{x}-\\dfrac1{\\sqrt{x}})^2=x+\\dfrac1{x}-2\\\\\n\\end{array}\n$</p>\n\n<p>Note that \n$g'(x)\n=1-\\dfrac1{x^2}\n$.</p>\n\n<p>Don't know what to do with this,\nso I'll try something else.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}-a\n&amp;=\\frac{\\sqrt{(a+b)(a+c)}}{2}-a\\\\\n&amp;=a(\\frac{\\sqrt{(1+b/a)(1+c/a)}}{2}-1)\\\\\n\\end{array}\n$</p>\n\n<p>Again nothing.</p>\n\n<p>Let's try this.</p>\n\n<p>Since\n$(a+b+c)^2\n\\le 3(a^2+b^2+c^2)\n$\nwith equality iff\n$a=b=c$,</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n&amp;=(\\frac32(a^2+b^2+c^2+2ab+2ac+2bc)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=(\\frac32(a+b+c)^2-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;\\le(\\frac32 3(a^2+b^2+c^2)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=a^2+b^2+c^2\\\\\n\\end{array}\n$</p>\n\n<p>Therefore,\nif at least two of\n$a, b, c$ are distinct,\nthe sum of their squares\ndecreases.</p>\n\n<p>This is not a proof of convergence,\nbut a start.</p>\n\n<p>Let's try a variation on \nthis last result.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n\\end{array}\n$</p>\n\n<p>so</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n-(a^2+b^2+c^2)\n&amp;=(-3a^2-3b^2-3c^2+3ab+3ac+3bc)/4\\\\\n&amp;=-3(a^2+b^2+c^2-ab-ac-bc)/4\\\\\n&amp;=-3(2a^2+2b^2+2c^2-2ab-2ac-2bc)/8\\\\\n&amp;=-3((a-b)^2+(a-c)^2+(b-c)^2)/8\\\\\n\\end{array}\n$</p>\n\n<p>This shows precisely\nhow the sum of squares\ndecreases at each step.</p>\n\n<p>I think that\nthis implies convergence.</p>\n" }, { "answer_id": 2111108, "score": 2, "text": "<p>Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \\geq 0$. Then plugging this to OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation</p>\n\n<p>$$ a_{n+1} = \\frac{a_n + b_n}{2}, \\qquad b_{n+1} = \\sqrt{a_{n+1}b_n}. 
$$</p>\n\n<p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p>\n\n<p>$$ \\lim_{n\\to\\infty} a_n = \\lim_{n\\to\\infty} b_n = SB(a_0, b_0)\n= \\begin{cases}\n\\dfrac{\\sqrt{a^2 - b^2}}{\\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\\\\n\\dfrac{\\sqrt{b^2 - a^2}}{\\operatorname{arccos}(a/b)}, &amp; a &lt; b \\\\\na, &amp; a = b\n\\end{cases} \\tag{*} $$</p>\n\n<hr>\n\n<p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href=\"http://www.jstor.org/stable/2317754\" rel=\"nofollow noreferrer\">Carlson's proof of $\\text{(*)}$</a> so that future me do not repeat this mistake.</p>\n\n<p>Define $I(a,b,c)$ by</p>\n\n<p>$$ I(a,b,c) := \\lim_{R\\to\\infty} \\int_{-R}^{R} \\frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p>\n\n<p>where $z^{1/3} = \\exp(\\frac{1}{3}\\log z)$ with the branch cut $-\\pi &lt; \\arg(z) \\leq \\pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \\mapsto 4x + ab + bc + ca$, we find that</p>\n\n<p>$$ I(a, b, c) = I\\left( \\frac{\\sqrt{(a+b)(a+c)}}{2}, \\frac{\\sqrt{(b+c)(b+a)}}{2}, \\frac{\\sqrt{(c+a)(c+b)}}{2} \\right). $$</p>\n\n<p>This tells us that $I(a_n,b_n,c_n) = \\cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href=\"https://en.wikipedia.org/wiki/Landen&#39;s_transformation\" rel=\"nofollow noreferrer\"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p>\n\n<p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p>\n" }, { "answer_id": 2111068, "score": 0, "text": "<p>I can show that\nthe sum of the squares of the terms\ndecreases at each step.</p>\n\n<p>I think this implies convergence,\nbut I am not completely sure.</p>\n\n<p>$a_{n+1}\n=\\frac{\\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\\\ \nb_{n+1}=\n\\frac{\\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\\\ \nc_{n+1}\n=\\frac{\\sqrt{(c_n+a_n)(c_n+b_n)}}{2}\n$</p>\n\n<p>I'm going to play around\nand see if anything interesting happens,\nwith a goal of proving convergence,\nmaybe.</p>\n\n<p>Multiplying, we get</p>\n\n<p>$a_{n+1}b_{n+1}c_{n+1}\n=(a_n+b_n)(a_n+c_n)(b_n+c_n)/8\n=(a^2 b + a^2 c + a b^2 + 2 a b c + a c^2 + b^2 c + b c^2)/8\n$\n(omitting the \"_n\")\nso that</p>\n\n<p>$\\begin{array}\\\\\n\\dfrac{a_{n+1}b_{n+1}c_{n+1}}{abc}\n&amp;=(\\frac{a}{c} + \\frac{a}{b} + \\frac{b}{c} + 2 + \\frac{c}{b} + \\frac{b}{a} + \\frac{c}{a})/8\\\\\n&amp;=(\\frac{a}{c} + \\frac{c}{a} + \\frac{a}{b} + \\frac{b}{a} + \\frac{b}{c}+ \\frac{c}{b} + 2 )/8\\\\\n&amp;=(\\frac{a}{c} -2+ \\frac{c}{a} + \\frac{a}{b}-2 + \\frac{b}{a} + \\frac{b}{c}-2+ \\frac{c}{b} + 8 )/8\\\\\n&amp;=(g(\\dfrac{a}{c}) + g(\\dfrac{a}{b}) + g(\\dfrac{b}{c}))/8+1\n\\qquad\\text{where } g(x) = (\\sqrt{x}-\\dfrac1{\\sqrt{x}})^2=x+\\dfrac1{x}-2\\\\\n\\end{array}\n$</p>\n\n<p>Note that \n$g'(x)\n=1-\\dfrac1{x^2}\n$.</p>\n\n<p>Don't know what to do with this,\nso I'll try something else.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}-a\n&amp;=\\frac{\\sqrt{(a+b)(a+c)}}{2}-a\\\\\n&amp;=a(\\frac{\\sqrt{(1+b/a)(1+c/a)}}{2}-1)\\\\\n\\end{array}\n$</p>\n\n<p>Again nothing.</p>\n\n<p>Let's try this.</p>\n\n<p>Since\n$(a+b+c)^2\n\\le 3(a^2+b^2+c^2)\n$\nwith equality 
iff\n$a=b=c$,</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n&amp;=(\\frac32(a^2+b^2+c^2+2ab+2ac+2bc)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=(\\frac32(a+b+c)^2-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;\\le(\\frac32 3(a^2+b^2+c^2)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=a^2+b^2+c^2\\\\\n\\end{array}\n$</p>\n\n<p>Therefore,\nif at least two of\n$a, b, c$ are distinct,\nthe sum of their squares\ndecreases.</p>\n\n<p>This is not a proof of convergence,\nbut a start.</p>\n\n<p>Let's try a variation on \nthis last result.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n\\end{array}\n$</p>\n\n<p>so</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n-(a^2+b^2+c^2)\n&amp;=(-3a^2-3b^2-3c^2+3ab+3ac+3bc)/4\\\\\n&amp;=-3(a^2+b^2+c^2-ab-ac-bc)/4\\\\\n&amp;=-3(2a^2+2b^2+2c^2-2ab-2ac-2bc)/8\\\\\n&amp;=-3((a-b)^2+(a-c)^2+(b-c)^2)/8\\\\\n\\end{array}\n$</p>\n\n<p>This shows precisely\nhow the sum of squares\ndecreases at each step.</p>\n\n<p>I think that\nthis implies convergence.</p>\n" }, { "answer_id": 2111108, "score": 2, "text": "<p>Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \\geq 0$. Then plugging this to OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation</p>\n\n<p>$$ a_{n+1} = \\frac{a_n + b_n}{2}, \\qquad b_{n+1} = \\sqrt{a_{n+1}b_n}. $$</p>\n\n<p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p>\n\n<p>$$ \\lim_{n\\to\\infty} a_n = \\lim_{n\\to\\infty} b_n = SB(a_0, b_0)\n= \\begin{cases}\n\\dfrac{\\sqrt{a^2 - b^2}}{\\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\\\\n\\dfrac{\\sqrt{b^2 - a^2}}{\\operatorname{arccos}(a/b)}, &amp; a &lt; b \\\\\na, &amp; a = b\n\\end{cases} \\tag{*} $$</p>\n\n<hr>\n\n<p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href=\"http://www.jstor.org/stable/2317754\" rel=\"nofollow noreferrer\">Carlson's proof of $\\text{(*)}$</a> so that future me do not repeat this mistake.</p>\n\n<p>Define $I(a,b,c)$ by</p>\n\n<p>$$ I(a,b,c) := \\lim_{R\\to\\infty} \\int_{-R}^{R} \\frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p>\n\n<p>where $z^{1/3} = \\exp(\\frac{1}{3}\\log z)$ with the branch cut $-\\pi &lt; \\arg(z) \\leq \\pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \\mapsto 4x + ab + bc + ca$, we find that</p>\n\n<p>$$ I(a, b, c) = I\\left( \\frac{\\sqrt{(a+b)(a+c)}}{2}, \\frac{\\sqrt{(b+c)(b+a)}}{2}, \\frac{\\sqrt{(c+a)(c+b)}}{2} \\right). $$</p>\n\n<p>This tells us that $I(a_n,b_n,c_n) = \\cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href=\"https://en.wikipedia.org/wiki/Landen&#39;s_transformation\" rel=\"nofollow noreferrer\"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p>\n\n<p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! 
This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p>\n" }, { "answer_id": 2111068, "score": 0, "text": "<p>I can show that\nthe sum of the squares of the terms\ndecreases at each step.</p>\n\n<p>I think this implies convergence,\nbut I am not completely sure.</p>\n\n<p>$a_{n+1}\n=\\frac{\\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\\\ \nb_{n+1}=\n\\frac{\\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\\\ \nc_{n+1}\n=\\frac{\\sqrt{(c_n+a_n)(c_n+b_n)}}{2}\n$</p>\n\n<p>I'm going to play around\nand see if anything interesting happens,\nwith a goal of proving convergence,\nmaybe.</p>\n\n<p>Multiplying, we get</p>\n\n<p>$a_{n+1}b_{n+1}c_{n+1}\n=(a_n+b_n)(a_n+c_n)(b_n+c_n)/8\n=(a^2 b + a^2 c + a b^2 + 2 a b c + a c^2 + b^2 c + b c^2)/8\n$\n(omitting the \"_n\")\nso that</p>\n\n<p>$\\begin{array}\\\\\n\\dfrac{a_{n+1}b_{n+1}c_{n+1}}{abc}\n&amp;=(\\frac{a}{c} + \\frac{a}{b} + \\frac{b}{c} + 2 + \\frac{c}{b} + \\frac{b}{a} + \\frac{c}{a})/8\\\\\n&amp;=(\\frac{a}{c} + \\frac{c}{a} + \\frac{a}{b} + \\frac{b}{a} + \\frac{b}{c}+ \\frac{c}{b} + 2 )/8\\\\\n&amp;=(\\frac{a}{c} -2+ \\frac{c}{a} + \\frac{a}{b}-2 + \\frac{b}{a} + \\frac{b}{c}-2+ \\frac{c}{b} + 8 )/8\\\\\n&amp;=(g(\\dfrac{a}{c}) + g(\\dfrac{a}{b}) + g(\\dfrac{b}{c}))/8+1\n\\qquad\\text{where } g(x) = (\\sqrt{x}-\\dfrac1{\\sqrt{x}})^2=x+\\dfrac1{x}-2\\\\\n\\end{array}\n$</p>\n\n<p>Note that \n$g'(x)\n=1-\\dfrac1{x^2}\n$.</p>\n\n<p>Don't know what to do with this,\nso I'll try something else.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}-a\n&amp;=\\frac{\\sqrt{(a+b)(a+c)}}{2}-a\\\\\n&amp;=a(\\frac{\\sqrt{(1+b/a)(1+c/a)}}{2}-1)\\\\\n\\end{array}\n$</p>\n\n<p>Again nothing.</p>\n\n<p>Let's try this.</p>\n\n<p>Since\n$(a+b+c)^2\n\\le 3(a^2+b^2+c^2)\n$\nwith equality iff\n$a=b=c$,</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n&amp;=(\\frac32(a^2+b^2+c^2+2ab+2ac+2bc)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=(\\frac32(a+b+c)^2-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;\\le(\\frac32 3(a^2+b^2+c^2)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=a^2+b^2+c^2\\\\\n\\end{array}\n$</p>\n\n<p>Therefore,\nif at least two of\n$a, b, c$ are distinct,\nthe sum of their squares\ndecreases.</p>\n\n<p>This is not a proof of convergence,\nbut a start.</p>\n\n<p>Let's try a variation on \nthis last result.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n\\end{array}\n$</p>\n\n<p>so</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n-(a^2+b^2+c^2)\n&amp;=(-3a^2-3b^2-3c^2+3ab+3ac+3bc)/4\\\\\n&amp;=-3(a^2+b^2+c^2-ab-ac-bc)/4\\\\\n&amp;=-3(2a^2+2b^2+2c^2-2ab-2ac-2bc)/8\\\\\n&amp;=-3((a-b)^2+(a-c)^2+(b-c)^2)/8\\\\\n\\end{array}\n$</p>\n\n<p>This shows precisely\nhow the sum of squares\ndecreases at each step.</p>\n\n<p>I think that\nthis implies convergence.</p>\n" }, { "answer_id": 2111108, "score": 2, "text": "<p>Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \\geq 0$. Then plugging this to OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation</p>\n\n<p>$$ a_{n+1} = \\frac{a_n + b_n}{2}, \\qquad b_{n+1} = \\sqrt{a_{n+1}b_n}. 
$$</p>\n\n<p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p>\n\n<p>$$ \\lim_{n\\to\\infty} a_n = \\lim_{n\\to\\infty} b_n = SB(a_0, b_0)\n= \\begin{cases}\n\\dfrac{\\sqrt{a^2 - b^2}}{\\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\\\\n\\dfrac{\\sqrt{b^2 - a^2}}{\\operatorname{arccos}(a/b)}, &amp; a &lt; b \\\\\na, &amp; a = b\n\\end{cases} \\tag{*} $$</p>\n\n<hr>\n\n<p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href=\"http://www.jstor.org/stable/2317754\" rel=\"nofollow noreferrer\">Carlson's proof of $\\text{(*)}$</a> so that future me do not repeat this mistake.</p>\n\n<p>Define $I(a,b,c)$ by</p>\n\n<p>$$ I(a,b,c) := \\lim_{R\\to\\infty} \\int_{-R}^{R} \\frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p>\n\n<p>where $z^{1/3} = \\exp(\\frac{1}{3}\\log z)$ with the branch cut $-\\pi &lt; \\arg(z) \\leq \\pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \\mapsto 4x + ab + bc + ca$, we find that</p>\n\n<p>$$ I(a, b, c) = I\\left( \\frac{\\sqrt{(a+b)(a+c)}}{2}, \\frac{\\sqrt{(b+c)(b+a)}}{2}, \\frac{\\sqrt{(c+a)(c+b)}}{2} \\right). $$</p>\n\n<p>This tells us that $I(a_n,b_n,c_n) = \\cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href=\"https://en.wikipedia.org/wiki/Landen&#39;s_transformation\" rel=\"nofollow noreferrer\"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p>\n\n<p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p>\n" }, { "answer_id": 2111068, "score": 0, "text": "<p>I can show that\nthe sum of the squares of the terms\ndecreases at each step.</p>\n\n<p>I think this implies convergence,\nbut I am not completely sure.</p>\n\n<p>$a_{n+1}\n=\\frac{\\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\\\ \nb_{n+1}=\n\\frac{\\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\\\ \nc_{n+1}\n=\\frac{\\sqrt{(c_n+a_n)(c_n+b_n)}}{2}\n$</p>\n\n<p>I'm going to play around\nand see if anything interesting happens,\nwith a goal of proving convergence,\nmaybe.</p>\n\n<p>Multiplying, we get</p>\n\n<p>$a_{n+1}b_{n+1}c_{n+1}\n=(a_n+b_n)(a_n+c_n)(b_n+c_n)/8\n=(a^2 b + a^2 c + a b^2 + 2 a b c + a c^2 + b^2 c + b c^2)/8\n$\n(omitting the \"_n\")\nso that</p>\n\n<p>$\\begin{array}\\\\\n\\dfrac{a_{n+1}b_{n+1}c_{n+1}}{abc}\n&amp;=(\\frac{a}{c} + \\frac{a}{b} + \\frac{b}{c} + 2 + \\frac{c}{b} + \\frac{b}{a} + \\frac{c}{a})/8\\\\\n&amp;=(\\frac{a}{c} + \\frac{c}{a} + \\frac{a}{b} + \\frac{b}{a} + \\frac{b}{c}+ \\frac{c}{b} + 2 )/8\\\\\n&amp;=(\\frac{a}{c} -2+ \\frac{c}{a} + \\frac{a}{b}-2 + \\frac{b}{a} + \\frac{b}{c}-2+ \\frac{c}{b} + 8 )/8\\\\\n&amp;=(g(\\dfrac{a}{c}) + g(\\dfrac{a}{b}) + g(\\dfrac{b}{c}))/8+1\n\\qquad\\text{where } g(x) = (\\sqrt{x}-\\dfrac1{\\sqrt{x}})^2=x+\\dfrac1{x}-2\\\\\n\\end{array}\n$</p>\n\n<p>Note that \n$g'(x)\n=1-\\dfrac1{x^2}\n$.</p>\n\n<p>Don't know what to do with this,\nso I'll try something else.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}-a\n&amp;=\\frac{\\sqrt{(a+b)(a+c)}}{2}-a\\\\\n&amp;=a(\\frac{\\sqrt{(1+b/a)(1+c/a)}}{2}-1)\\\\\n\\end{array}\n$</p>\n\n<p>Again nothing.</p>\n\n<p>Let's try this.</p>\n\n<p>Since\n$(a+b+c)^2\n\\le 3(a^2+b^2+c^2)\n$\nwith equality 
iff\n$a=b=c$,</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n&amp;=(\\frac32(a^2+b^2+c^2+2ab+2ac+2bc)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=(\\frac32(a+b+c)^2-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;\\le(\\frac32 3(a^2+b^2+c^2)-\\frac12(a^2+b^2+c^2))/4\\\\\n&amp;=a^2+b^2+c^2\\\\\n\\end{array}\n$</p>\n\n<p>Therefore,\nif at least two of\n$a, b, c$ are distinct,\nthe sum of their squares\ndecreases.</p>\n\n<p>This is not a proof of convergence,\nbut a start.</p>\n\n<p>Let's try a variation on \nthis last result.</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n&amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\\\\n&amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\\\\n&amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\\\\n\\end{array}\n$</p>\n\n<p>so</p>\n\n<p>$\\begin{array}\\\\\na_{n+1}^2+b_{n+1}^2+c_{n+1}^2\n-(a^2+b^2+c^2)\n&amp;=(-3a^2-3b^2-3c^2+3ab+3ac+3bc)/4\\\\\n&amp;=-3(a^2+b^2+c^2-ab-ac-bc)/4\\\\\n&amp;=-3(2a^2+2b^2+2c^2-2ab-2ac-2bc)/8\\\\\n&amp;=-3((a-b)^2+(a-c)^2+(b-c)^2)/8\\\\\n\\end{array}\n$</p>\n\n<p>This shows precisely\nhow the sum of squares\ndecreases at each step.</p>\n\n<p>I think that\nthis implies convergence.</p>\n" }, { "answer_id": 2111108, "score": 2, "text": "<p>Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \\geq 0$. Then plugging this to OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation</p>\n\n<p>$$ a_{n+1} = \\frac{a_n + b_n}{2}, \\qquad b_{n+1} = \\sqrt{a_{n+1}b_n}. $$</p>\n\n<p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p>\n\n<p>$$ \\lim_{n\\to\\infty} a_n = \\lim_{n\\to\\infty} b_n = SB(a_0, b_0)\n= \\begin{cases}\n\\dfrac{\\sqrt{a^2 - b^2}}{\\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\\\\n\\dfrac{\\sqrt{b^2 - a^2}}{\\operatorname{arccos}(a/b)}, &amp; a &lt; b \\\\\na, &amp; a = b\n\\end{cases} \\tag{*} $$</p>\n\n<hr>\n\n<p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href=\"http://www.jstor.org/stable/2317754\" rel=\"nofollow noreferrer\">Carlson's proof of $\\text{(*)}$</a> so that future me do not repeat this mistake.</p>\n\n<p>Define $I(a,b,c)$ by</p>\n\n<p>$$ I(a,b,c) := \\lim_{R\\to\\infty} \\int_{-R}^{R} \\frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p>\n\n<p>where $z^{1/3} = \\exp(\\frac{1}{3}\\log z)$ with the branch cut $-\\pi &lt; \\arg(z) \\leq \\pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \\mapsto 4x + ab + bc + ca$, we find that</p>\n\n<p>$$ I(a, b, c) = I\\left( \\frac{\\sqrt{(a+b)(a+c)}}{2}, \\frac{\\sqrt{(b+c)(b+a)}}{2}, \\frac{\\sqrt{(c+a)(c+b)}}{2} \\right). $$</p>\n\n<p>This tells us that $I(a_n,b_n,c_n) = \\cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href=\"https://en.wikipedia.org/wiki/Landen&#39;s_transformation\" rel=\"nofollow noreferrer\"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p>\n\n<p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! 
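A minimal numerical sketch (an addition of mine, not part of either answer; Python and the helper name step are assumed, since the row contains no code): it checks the exact decrement identity derived in answer 2111068, $S_{n+1}-S_n=-\tfrac38\left((a-b)^2+(a-c)^2+(b-c)^2\right)$ with $S_n=a_n^2+b_n^2+c_n^2$. Note the identity does yield convergence: $S_n$ is nonincreasing and bounded below, so the pairwise gaps tend to $0$; since $\min(a_n,b_n,c_n)$ is nondecreasing and $\max(a_n,b_n,c_n)$ is nonincreasing, all three sequences converge to one common limit.

from math import sqrt

def step(a, b, c):
    # One step of the iteration from the question.
    return (sqrt((a + b) * (a + c)) / 2,
            sqrt((b + a) * (b + c)) / 2,
            sqrt((c + a) * (c + b)) / 2)

a, b, c = 1.0, 2.0, 5.0  # illustrative starting values
for _ in range(5):
    s_old = a * a + b * b + c * c
    predicted = -3 * ((a - b) ** 2 + (a - c) ** 2 + (b - c) ** 2) / 8
    a, b, c = step(a, b, c)
    s_new = a * a + b * b + c * c
    print(s_new - s_old, predicted)  # the two columns agree up to rounding

(With $a,b,c = 1,2,5$ the first step gives $S_1 - S_0 = 20.25 - 30 = -9.75 = -\tfrac38\cdot 26$, matching the identity.)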
Question: Closed form for the limit of the iterated sequence $a_{n+1}=\frac{\sqrt{(a_n+b_n)(a_n+c_n)}}{2}$
Answer 1: <p>Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \geq 0$. Then plugging this into OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation</p> <p>$$ a_{n+1} = \frac{a_n + b_n}{2}, \qquad b_{n+1} = \sqrt{a_{n+1}b_n}. $$</p> <p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p> <p>$$ \lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = SB(a_0, b_0) = \begin{cases} \dfrac{\sqrt{a^2 - b^2}}{\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\ \dfrac{\sqrt{b^2 - a^2}}{\operatorname{arccos}(a/b)}, &amp; a &lt; b \\ a, &amp; a = b \end{cases} \tag{*} $$</p> <hr> <p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href="http://www.jstor.org/stable/2317754" rel="nofollow noreferrer">Carlson's proof of $\text{(*)}$</a> so that future me does not repeat this mistake.</p> <p>Define $I(a,b,c)$ by</p> <p>$$ I(a,b,c) := \lim_{R\to\infty} \int_{-R}^{R} \frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p> <p>where $z^{1/3} = \exp(\frac{1}{3}\log z)$ with the branch cut $-\pi &lt; \arg(z) \leq \pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \mapsto 4x + ab + bc + ca$, we find that</p> <p>$$ I(a, b, c) = I\left( \frac{\sqrt{(a+b)(a+c)}}{2}, \frac{\sqrt{(b+c)(b+a)}}{2}, \frac{\sqrt{(c+a)(c+b)}}{2} \right). $$</p> <p>This tells us that $I(a_n,b_n,c_n) = \cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href="https://en.wikipedia.org/wiki/Landen&#39;s_transformation" rel="nofollow noreferrer"><em>Landen's transformation</em></a>, this may possibly allow us to analyze the limit $L$ using $I$.</p> <p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p>
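A hedged verification sketch for Answer 1 (again my addition, in Python; the helper name sb_mean and the starting values are illustrative assumptions): it runs the three-term iteration with $b_0 = c_0$, where the answer's reduction to the Schwab-Borchardt recurrence applies, and compares the common limit against the closed form $(*)$.

from math import sqrt, acosh, acos

def sb_mean(a, b):
    # Closed form (*) for the Schwab-Borchardt mean SB(a, b).
    if a > b:
        return sqrt(a * a - b * b) / acosh(a / b)
    if a < b:
        return sqrt(b * b - a * a) / acos(a / b)
    return a

a, b, c = 3.0, 1.0, 1.0  # b_0 = c_0, so the reduction in Answer 1 applies
target = sb_mean(a, b)
for _ in range(60):  # pairwise gaps shrink by roughly a factor of 4 per step
    a, b, c = (sqrt((a + b) * (a + c)) / 2,
               sqrt((b + a) * (b + c)) / 2,
               sqrt((c + a) * (c + b)) / 2)
print(a, b, c, target)  # all four agree to machine precision (about 1.6046)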
$$</p> <p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p> <p>$$ \lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = SB(a_0, b_0) = \begin{cases} \dfrac{\sqrt{a^2 - b^2}}{\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\ \dfrac{\sqrt{b^2 - a^2}}{\operatorname{arccos}(a/b)}, &amp; a &lt; b \\ a, &amp; a = b \end{cases} \tag{*} $$</p> <hr> <p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href="http://www.jstor.org/stable/2317754" rel="nofollow noreferrer">Carlson's proof of $\text{(*)}$</a> so that future me do not repeat this mistake.</p> <p>Define $I(a,b,c)$ by</p> <p>$$ I(a,b,c) := \lim_{R\to\infty} \int_{-R}^{R} \frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p> <p>where $z^{1/3} = \exp(\frac{1}{3}\log z)$ with the branch cut $-\pi &lt; \arg(z) \leq \pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \mapsto 4x + ab + bc + ca$, we find that</p> <p>$$ I(a, b, c) = I\left( \frac{\sqrt{(a+b)(a+c)}}{2}, \frac{\sqrt{(b+c)(b+a)}}{2}, \frac{\sqrt{(c+a)(c+b)}}{2} \right). $$</p> <p>This tells us that $I(a_n,b_n,c_n) = \cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href="https://en.wikipedia.org/wiki/Landen&#39;s_transformation" rel="nofollow noreferrer"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p> <p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p> Answer 20: <p>I can show that the sum of the squares of the terms decreases at each step.</p> <p>I think this implies convergence, but I am not completely sure.</p> <p>$a_{n+1} =\frac{\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\ b_{n+1}= \frac{\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\ c_{n+1} =\frac{\sqrt{(c_n+a_n)(c_n+b_n)}}{2} $</p> <p>I'm going to play around and see if anything interesting happens, with a goal of proving convergence, maybe.</p> <p>Multiplying, we get</p> <p>$a_{n+1}b_{n+1}c_{n+1} =(a_n+b_n)(a_n+c_n)(b_n+c_n)/8 =(a^2 b + a^2 c + a b^2 + 2 a b c + a c^2 + b^2 c + b c^2)/8 $ (omitting the "_n") so that</p> <p>$\begin{array}\\ \dfrac{a_{n+1}b_{n+1}c_{n+1}}{abc} &amp;=(\frac{a}{c} + \frac{a}{b} + \frac{b}{c} + 2 + \frac{c}{b} + \frac{b}{a} + \frac{c}{a})/8\\ &amp;=(\frac{a}{c} + \frac{c}{a} + \frac{a}{b} + \frac{b}{a} + \frac{b}{c}+ \frac{c}{b} + 2 )/8\\ &amp;=(\frac{a}{c} -2+ \frac{c}{a} + \frac{a}{b}-2 + \frac{b}{a} + \frac{b}{c}-2+ \frac{c}{b} + 8 )/8\\ &amp;=(g(\dfrac{a}{c}) + g(\dfrac{a}{b}) + g(\dfrac{b}{c}))/8+1 \qquad\text{where } g(x) = (\sqrt{x}-\dfrac1{\sqrt{x}})^2=x+\dfrac1{x}-2\\ \end{array} $</p> <p>Note that $g'(x) =1-\dfrac1{x^2} $.</p> <p>Don't know what to do with this, so I'll try something else.</p> <p>$\begin{array}\\ a_{n+1}-a &amp;=\frac{\sqrt{(a+b)(a+c)}}{2}-a\\ &amp;=a(\frac{\sqrt{(1+b/a)(1+c/a)}}{2}-1)\\ \end{array} $</p> <p>Again nothing.</p> <p>Let's try this.</p> <p>Since $(a+b+c)^2 \le 3(a^2+b^2+c^2) $ with equality iff $a=b=c$,</p> <p>$\begin{array}\\ a_{n+1}^2+b_{n+1}^2+c_{n+1}^2 &amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\ &amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\ &amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\ &amp;=(\frac32(a^2+b^2+c^2+2ab+2ac+2bc)-\frac12(a^2+b^2+c^2))/4\\ &amp;=(\frac32(a+b+c)^2-\frac12(a^2+b^2+c^2))/4\\ &amp;\le(\frac32 3(a^2+b^2+c^2)-\frac12(a^2+b^2+c^2))/4\\ &amp;=a^2+b^2+c^2\\ \end{array} $</p> <p>Therefore, if at least two of $a, b, c$ are 
distinct, the sum of their squares decreases.</p> <p>This is not a proof of convergence, but a start.</p> <p>Let's try a variation on this last result.</p> <p>$\begin{array}\\ a_{n+1}^2+b_{n+1}^2+c_{n+1}^2 &amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\ &amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\ &amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\ \end{array} $</p> <p>so</p> <p>$\begin{array}\\ a_{n+1}^2+b_{n+1}^2+c_{n+1}^2 -(a^2+b^2+c^2) &amp;=(-3a^2-3b^2-3c^2+3ab+3ac+3bc)/4\\ &amp;=-3(a^2+b^2+c^2-ab-ac-bc)/4\\ &amp;=-3(2a^2+2b^2+2c^2-2ab-2ac-2bc)/8\\ &amp;=-3((a-b)^2+(a-c)^2+(b-c)^2)/8\\ \end{array} $</p> <p>This shows precisely how the sum of squares decreases at each step.</p> <p>I think that this implies convergence.</p> Answer 21: <p>Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \geq 0$. Then plugging this to OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation</p> <p>$$ a_{n+1} = \frac{a_n + b_n}{2}, \qquad b_{n+1} = \sqrt{a_{n+1}b_n}. $$</p> <p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p> <p>$$ \lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = SB(a_0, b_0) = \begin{cases} \dfrac{\sqrt{a^2 - b^2}}{\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\ \dfrac{\sqrt{b^2 - a^2}}{\operatorname{arccos}(a/b)}, &amp; a &lt; b \\ a, &amp; a = b \end{cases} \tag{*} $$</p> <hr> <p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href="http://www.jstor.org/stable/2317754" rel="nofollow noreferrer">Carlson's proof of $\text{(*)}$</a> so that future me do not repeat this mistake.</p> <p>Define $I(a,b,c)$ by</p> <p>$$ I(a,b,c) := \lim_{R\to\infty} \int_{-R}^{R} \frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p> <p>where $z^{1/3} = \exp(\frac{1}{3}\log z)$ with the branch cut $-\pi &lt; \arg(z) \leq \pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \mapsto 4x + ab + bc + ca$, we find that</p> <p>$$ I(a, b, c) = I\left( \frac{\sqrt{(a+b)(a+c)}}{2}, \frac{\sqrt{(b+c)(b+a)}}{2}, \frac{\sqrt{(c+a)(c+b)}}{2} \right). $$</p> <p>This tells us that $I(a_n,b_n,c_n) = \cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href="https://en.wikipedia.org/wiki/Landen&#39;s_transformation" rel="nofollow noreferrer"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p> <p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! 
This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p> Answer 22: <p>I can show that the sum of the squares of the terms decreases at each step.</p> <p>I think this implies convergence, but I am not completely sure.</p> <p>$a_{n+1} =\frac{\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\ b_{n+1}= \frac{\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\ c_{n+1} =\frac{\sqrt{(c_n+a_n)(c_n+b_n)}}{2} $</p> <p>I'm going to play around and see if anything interesting happens, with a goal of proving convergence, maybe.</p> <p>Multiplying, we get</p> <p>$a_{n+1}b_{n+1}c_{n+1} =(a_n+b_n)(a_n+c_n)(b_n+c_n)/8 =(a^2 b + a^2 c + a b^2 + 2 a b c + a c^2 + b^2 c + b c^2)/8 $ (omitting the "_n") so that</p> <p>$\begin{array}\\ \dfrac{a_{n+1}b_{n+1}c_{n+1}}{abc} &amp;=(\frac{a}{c} + \frac{a}{b} + \frac{b}{c} + 2 + \frac{c}{b} + \frac{b}{a} + \frac{c}{a})/8\\ &amp;=(\frac{a}{c} + \frac{c}{a} + \frac{a}{b} + \frac{b}{a} + \frac{b}{c}+ \frac{c}{b} + 2 )/8\\ &amp;=(\frac{a}{c} -2+ \frac{c}{a} + \frac{a}{b}-2 + \frac{b}{a} + \frac{b}{c}-2+ \frac{c}{b} + 8 )/8\\ &amp;=(g(\dfrac{a}{c}) + g(\dfrac{a}{b}) + g(\dfrac{b}{c}))/8+1 \qquad\text{where } g(x) = (\sqrt{x}-\dfrac1{\sqrt{x}})^2=x+\dfrac1{x}-2\\ \end{array} $</p> <p>Note that $g'(x) =1-\dfrac1{x^2} $.</p> <p>Don't know what to do with this, so I'll try something else.</p> <p>$\begin{array}\\ a_{n+1}-a &amp;=\frac{\sqrt{(a+b)(a+c)}}{2}-a\\ &amp;=a(\frac{\sqrt{(1+b/a)(1+c/a)}}{2}-1)\\ \end{array} $</p> <p>Again nothing.</p> <p>Let's try this.</p> <p>Since $(a+b+c)^2 \le 3(a^2+b^2+c^2) $ with equality iff $a=b=c$,</p> <p>$\begin{array}\\ a_{n+1}^2+b_{n+1}^2+c_{n+1}^2 &amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\ &amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\ &amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\ &amp;=(\frac32(a^2+b^2+c^2+2ab+2ac+2bc)-\frac12(a^2+b^2+c^2))/4\\ &amp;=(\frac32(a+b+c)^2-\frac12(a^2+b^2+c^2))/4\\ &amp;\le(\frac32 3(a^2+b^2+c^2)-\frac12(a^2+b^2+c^2))/4\\ &amp;=a^2+b^2+c^2\\ \end{array} $</p> <p>Therefore, if at least two of $a, b, c$ are distinct, the sum of their squares decreases.</p> <p>This is not a proof of convergence, but a start.</p> <p>Let's try a variation on this last result.</p> <p>$\begin{array}\\ a_{n+1}^2+b_{n+1}^2+c_{n+1}^2 &amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\ &amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\ &amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\ \end{array} $</p> <p>so</p> <p>$\begin{array}\\ a_{n+1}^2+b_{n+1}^2+c_{n+1}^2 -(a^2+b^2+c^2) &amp;=(-3a^2-3b^2-3c^2+3ab+3ac+3bc)/4\\ &amp;=-3(a^2+b^2+c^2-ab-ac-bc)/4\\ &amp;=-3(2a^2+2b^2+2c^2-2ab-2ac-2bc)/8\\ &amp;=-3((a-b)^2+(a-c)^2+(b-c)^2)/8\\ \end{array} $</p> <p>This shows precisely how the sum of squares decreases at each step.</p> <p>I think that this implies convergence.</p> Answer 23: <p>Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \geq 0$. Then plugging this to OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation</p> <p>$$ a_{n+1} = \frac{a_n + b_n}{2}, \qquad b_{n+1} = \sqrt{a_{n+1}b_n}. 
$$</p> <p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p> <p>$$ \lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = SB(a_0, b_0) = \begin{cases} \dfrac{\sqrt{a^2 - b^2}}{\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\ \dfrac{\sqrt{b^2 - a^2}}{\operatorname{arccos}(a/b)}, &amp; a &lt; b \\ a, &amp; a = b \end{cases} \tag{*} $$</p> <hr> <p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href="http://www.jstor.org/stable/2317754" rel="nofollow noreferrer">Carlson's proof of $\text{(*)}$</a> so that future me do not repeat this mistake.</p> <p>Define $I(a,b,c)$ by</p> <p>$$ I(a,b,c) := \lim_{R\to\infty} \int_{-R}^{R} \frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p> <p>where $z^{1/3} = \exp(\frac{1}{3}\log z)$ with the branch cut $-\pi &lt; \arg(z) \leq \pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \mapsto 4x + ab + bc + ca$, we find that</p> <p>$$ I(a, b, c) = I\left( \frac{\sqrt{(a+b)(a+c)}}{2}, \frac{\sqrt{(b+c)(b+a)}}{2}, \frac{\sqrt{(c+a)(c+b)}}{2} \right). $$</p> <p>This tells us that $I(a_n,b_n,c_n) = \cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href="https://en.wikipedia.org/wiki/Landen&#39;s_transformation" rel="nofollow noreferrer"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p> <p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p> Answer 24: <p>I can show that the sum of the squares of the terms decreases at each step.</p> <p>I think this implies convergence, but I am not completely sure.</p> <p>$a_{n+1} =\frac{\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\ b_{n+1}= \frac{\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\ c_{n+1} =\frac{\sqrt{(c_n+a_n)(c_n+b_n)}}{2} $</p> <p>I'm going to play around and see if anything interesting happens, with a goal of proving convergence, maybe.</p> <p>Multiplying, we get</p> <p>$a_{n+1}b_{n+1}c_{n+1} =(a_n+b_n)(a_n+c_n)(b_n+c_n)/8 =(a^2 b + a^2 c + a b^2 + 2 a b c + a c^2 + b^2 c + b c^2)/8 $ (omitting the "_n") so that</p> <p>$\begin{array}\\ \dfrac{a_{n+1}b_{n+1}c_{n+1}}{abc} &amp;=(\frac{a}{c} + \frac{a}{b} + \frac{b}{c} + 2 + \frac{c}{b} + \frac{b}{a} + \frac{c}{a})/8\\ &amp;=(\frac{a}{c} + \frac{c}{a} + \frac{a}{b} + \frac{b}{a} + \frac{b}{c}+ \frac{c}{b} + 2 )/8\\ &amp;=(\frac{a}{c} -2+ \frac{c}{a} + \frac{a}{b}-2 + \frac{b}{a} + \frac{b}{c}-2+ \frac{c}{b} + 8 )/8\\ &amp;=(g(\dfrac{a}{c}) + g(\dfrac{a}{b}) + g(\dfrac{b}{c}))/8+1 \qquad\text{where } g(x) = (\sqrt{x}-\dfrac1{\sqrt{x}})^2=x+\dfrac1{x}-2\\ \end{array} $</p> <p>Note that $g'(x) =1-\dfrac1{x^2} $.</p> <p>Don't know what to do with this, so I'll try something else.</p> <p>$\begin{array}\\ a_{n+1}-a &amp;=\frac{\sqrt{(a+b)(a+c)}}{2}-a\\ &amp;=a(\frac{\sqrt{(1+b/a)(1+c/a)}}{2}-1)\\ \end{array} $</p> <p>Again nothing.</p> <p>Let's try this.</p> <p>Since $(a+b+c)^2 \le 3(a^2+b^2+c^2) $ with equality iff $a=b=c$,</p> <p>$\begin{array}\\ a_{n+1}^2+b_{n+1}^2+c_{n+1}^2 &amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\ &amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\ &amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\ &amp;=(\frac32(a^2+b^2+c^2+2ab+2ac+2bc)-\frac12(a^2+b^2+c^2))/4\\ &amp;=(\frac32(a+b+c)^2-\frac12(a^2+b^2+c^2))/4\\ &amp;\le(\frac32 3(a^2+b^2+c^2)-\frac12(a^2+b^2+c^2))/4\\ &amp;=a^2+b^2+c^2\\ \end{array} $</p> <p>Therefore, if at least two of $a, b, c$ are 
distinct, the sum of their squares decreases.</p> <p>This is not a proof of convergence, but a start.</p> <p>Let's try a variation on this last result.</p> <p>$\begin{array}\\ a_{n+1}^2+b_{n+1}^2+c_{n+1}^2 &amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\ &amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\ &amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\ \end{array} $</p> <p>so</p> <p>$\begin{array}\\ a_{n+1}^2+b_{n+1}^2+c_{n+1}^2 -(a^2+b^2+c^2) &amp;=(-3a^2-3b^2-3c^2+3ab+3ac+3bc)/4\\ &amp;=-3(a^2+b^2+c^2-ab-ac-bc)/4\\ &amp;=-3(2a^2+2b^2+2c^2-2ab-2ac-2bc)/8\\ &amp;=-3((a-b)^2+(a-c)^2+(b-c)^2)/8\\ \end{array} $</p> <p>This shows precisely how the sum of squares decreases at each step.</p> <p>I think that this implies convergence.</p> Answer 25: <p>Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \geq 0$. Then plugging this to OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation</p> <p>$$ a_{n+1} = \frac{a_n + b_n}{2}, \qquad b_{n+1} = \sqrt{a_{n+1}b_n}. $$</p> <p>So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean</p> <p>$$ \lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = SB(a_0, b_0) = \begin{cases} \dfrac{\sqrt{a^2 - b^2}}{\operatorname{arccosh}(a/b)}, &amp; a &gt; b \\ \dfrac{\sqrt{b^2 - a^2}}{\operatorname{arccos}(a/b)}, &amp; a &lt; b \\ a, &amp; a = b \end{cases} \tag{*} $$</p> <hr> <p>Anyway, let me write down my <em>failed attempt</em> to mimic <a href="http://www.jstor.org/stable/2317754" rel="nofollow noreferrer">Carlson's proof of $\text{(*)}$</a> so that future me do not repeat this mistake.</p> <p>Define $I(a,b,c)$ by</p> <p>$$ I(a,b,c) := \lim_{R\to\infty} \int_{-R}^{R} \frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$</p> <p>where $z^{1/3} = \exp(\frac{1}{3}\log z)$ with the branch cut $-\pi &lt; \arg(z) \leq \pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \mapsto 4x + ab + bc + ca$, we find that</p> <p>$$ I(a, b, c) = I\left( \frac{\sqrt{(a+b)(a+c)}}{2}, \frac{\sqrt{(b+c)(b+a)}}{2}, \frac{\sqrt{(c+a)(c+b)}}{2} \right). $$</p> <p>This tells us that $I(a_n,b_n,c_n) = \cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from <a href="https://en.wikipedia.org/wiki/Landen&#39;s_transformation" rel="nofollow noreferrer"><em>Landen's transformation</em></a>, this may possibly allow us to analyze $L$ using $I$.</p> <p>Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! 
This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.</p> Answer 26: <p>I can show that the sum of the squares of the terms decreases at each step.</p> <p>I think this implies convergence, but I am not completely sure.</p> <p>$a_{n+1} =\frac{\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\ b_{n+1}= \frac{\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\ c_{n+1} =\frac{\sqrt{(c_n+a_n)(c_n+b_n)}}{2} $</p> <p>I'm going to play around and see if anything interesting happens, with a goal of proving convergence, maybe.</p> <p>Multiplying, we get</p> <p>$a_{n+1}b_{n+1}c_{n+1} =(a_n+b_n)(a_n+c_n)(b_n+c_n)/8 =(a^2 b + a^2 c + a b^2 + 2 a b c + a c^2 + b^2 c + b c^2)/8 $ (omitting the "_n") so that</p> <p>$\begin{array}\\ \dfrac{a_{n+1}b_{n+1}c_{n+1}}{abc} &amp;=(\frac{a}{c} + \frac{a}{b} + \frac{b}{c} + 2 + \frac{c}{b} + \frac{b}{a} + \frac{c}{a})/8\\ &amp;=(\frac{a}{c} + \frac{c}{a} + \frac{a}{b} + \frac{b}{a} + \frac{b}{c}+ \frac{c}{b} + 2 )/8\\ &amp;=(\frac{a}{c} -2+ \frac{c}{a} + \frac{a}{b}-2 + \frac{b}{a} + \frac{b}{c}-2+ \frac{c}{b} + 8 )/8\\ &amp;=(g(\dfrac{a}{c}) + g(\dfrac{a}{b}) + g(\dfrac{b}{c}))/8+1 \qquad\text{where } g(x) = (\sqrt{x}-\dfrac1{\sqrt{x}})^2=x+\dfrac1{x}-2\\ \end{array} $</p> <p>Note that $g'(x) =1-\dfrac1{x^2} $.</p> <p>Don't know what to do with this, so I'll try something else.</p> <p>$\begin{array}\\ a_{n+1}-a &amp;=\frac{\sqrt{(a+b)(a+c)}}{2}-a\\ &amp;=a(\frac{\sqrt{(1+b/a)(1+c/a)}}{2}-1)\\ \end{array} $</p> <p>Again nothing.</p> <p>Let's try this.</p> <p>Since $(a+b+c)^2 \le 3(a^2+b^2+c^2) $ with equality iff $a=b=c$,</p> <p>$\begin{array}\\ a_{n+1}^2+b_{n+1}^2+c_{n+1}^2 &amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\ &amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\ &amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\ &amp;=(\frac32(a^2+b^2+c^2+2ab+2ac+2bc)-\frac12(a^2+b^2+c^2))/4\\ &amp;=(\frac32(a+b+c)^2-\frac12(a^2+b^2+c^2))/4\\ &amp;\le(\frac32 3(a^2+b^2+c^2)-\frac12(a^2+b^2+c^2))/4\\ &amp;=a^2+b^2+c^2\\ \end{array} $</p> <p>Therefore, if at least two of $a, b, c$ are distinct, the sum of their squares decreases.</p> <p>This is not a proof of convergence, but a start.</p> <p>Let's try a variation on this last result.</p> <p>$\begin{array}\\ a_{n+1}^2+b_{n+1}^2+c_{n+1}^2 &amp;=((a+b)(a+c)+(a+b)(b+c)+(a+c)(b+c))/4\\ &amp;=(a^2+ab+ac+bc+ab+ac+b^2+bc+ab+ac+bc+c^2)/4\\ &amp;=(a^2+b^2+c^2+3ab+3ac+3bc)/4\\ \end{array} $</p> <p>so</p> <p>$\begin{array}\\ a_{n+1}^2+b_{n+1}^2+c_{n+1}^2 -(a^2+b^2+c^2) &amp;=(-3a^2-3b^2-3c^2+3ab+3ac+3bc)/4\\ &amp;=-3(a^2+b^2+c^2-ab-ac-bc)/4\\ &amp;=-3(2a^2+2b^2+2c^2-2ab-2ac-2bc)/8\\ &amp;=-3((a-b)^2+(a-c)^2+(b-c)^2)/8\\ \end{array} $</p> <p>This shows precisely how the sum of squares decreases at each step.</p> <p>I think that this implies convergence.</p>
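As a quick numeric sanity check on the two answers above, here is a minimal sketch in plain Python (not part of either answer; the helper names `step`, `iterate`, and `sb` are mine). It verifies the sum-of-squares drop derived in the second answer and, for $b_0 = c_0$, compares the limit of the iteration against the closed form $\text{(*)}$ from the first:

```python
import math

def step(a, b, c):
    # One step of the iteration from the question.
    return (math.sqrt((a + b) * (a + c)) / 2,
            math.sqrt((b + a) * (b + c)) / 2,
            math.sqrt((c + a) * (c + b)) / 2)

def iterate(a, b, c, n=60):
    # Run n steps and return the (numerically common) limit point.
    for _ in range(n):
        a, b, c = step(a, b, c)
    return a, b, c

def sb(a, b):
    # The closed form (*) for the Schwab-Borchardt mean SB(a, b).
    if a > b:
        return math.sqrt(a * a - b * b) / math.acosh(a / b)
    if a < b:
        return math.sqrt(b * b - a * a) / math.acos(a / b)
    return a

# 1) The sum of squares drops by exactly 3((a-b)^2+(a-c)^2+(b-c)^2)/8
#    at each step.
a, b, c = 2.0, 1.0, 0.5
a1, b1, c1 = step(a, b, c)
print((a*a + b*b + c*c) - (a1*a1 + b1*b1 + c1*c1))     # 1.3125
print(3 * ((a - b)**2 + (a - c)**2 + (b - c)**2) / 8)  # 1.3125

# 2) With b_0 = c_0 the common limit matches SB(a_0, b_0).
print(iterate(2.0, 1.0, 1.0))  # all three coordinates ~1.31519
print(sb(2.0, 1.0))            # ~1.31519
```

The agreement is consistent with (though of course not a proof of) the convergence claims above.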
What exactly is a Non Integrable function? How can they be solved?
2,111,305
7
https://math.stackexchange.com/questions/2111305/what-exactly-is-a-non-integrable-function-how-can-they-be-solved
<p>I was trying to find the integral of $\frac{\sin x}{x}$ recently, and nothing I tried appeared to get me any closer to a solution, so I looked it up and apparently it's a Non-integrable function.</p> <p>What makes a function Non-integrable and is there a way to solve it? Why can't it be solved with the "normal" methods?</p> <p>Forgive my error-ridden terminology, I'm new.</p>
[ { "answer_id": 2111336, "score": 16, "text": "<p>\"Integrability\" (more specifically, Lebesgue integrability) is a technical condition similar to absolute convergence of a sequence where a function is only considered integrable on $(a,b)$ if $$ \\int_a^b|f(x)|dx &lt; \\infty.$$ By comparison with $1/x,$ $\\sin(x)/x$ fails this criterion on $(-\\infty, \\infty)$ and is thus considered a non-integrable function on $(-\\infty, \\infty).$ This is likely the sense that was meant when you read that $\\sin(x)/x$ is not integrable.</p>\n\n<p>However, just like series can be convergent but not absolutely convergent, it turns out that since $\\sin(x)/x$ oscillates as $|x|\\rightarrow\\infty$, the 'area under the curve' cancels out over the oscillations and is finite. The integral $$ \\int_{-\\infty}^\\infty \\frac{\\sin(x)}{x}dx$$ can be defined as an improper Riemann integral and happens to equal $\\pi.$ This is just a warning that 'not integrable' (in the Lebesgue sense) doesn't mean that the definite integral doesn't have a value.</p>\n\n<p>All of this has little to do with your concern of finding an antiderivative for $\\sin(x)/x$ and thus computing the integral over an interval using the fundamental theorem of calculus. As others have mentioned, it turns out that the antiderivative cannot be expressed as an elementary function, so it's no surprise you haven't been successful in applying various integration tricks.</p>\n\n<p>Even though you cannot compute the integral in the usual way by taking an antiderivative, remember that, since $\\sin(x)/x$ is a nice bounded, continuous function, the definite integral $\\int_a^b\\frac{\\sin(x)}{x}dx$ exists a well-defined number (area under the curve) for any $a$ and $b$. Likewise, as others have mentioned, an antiderivative function $\\mathrm{Si}(x) = \\int_0^x\\frac{\\sin(t)}{t}dt$ exists and we have $$\\int_a^b\\frac{\\sin(x)}{x}dx = \\mathrm{Si}(b)-\\mathrm{Si}(a).$$ It's just that $\\mathrm{Si}(x)$ does not have a nice expression in terms of elementary functions like $\\sin$, $\\cos,$ $\\ln$, $e^x,$ etc like you would usually compute with integration techniques.... it's its 'own thing' (called the sine-integral function).</p>\n\n<p>Functions without an elementary antiderivative are usually called 'functions without an elementary antiderivative' and not called 'not integrable', although the latter would be perfectly reasonable terminology if it weren't already reserved for the technical condition I mentioned above. </p>\n" }, { "answer_id": 2111306, "score": 5, "text": "<p>$\\frac{\\sin x}{x}$ is integrable (since it is continuous), it is just that its indefinite integral cannot be expressed as an <a href=\"http://mathworld.wolfram.com/ElementaryFunction.html\" rel=\"nofollow noreferrer\">elementary function</a>. So its integral defines a special function <a href=\"http://mathworld.wolfram.com/SineIntegral.html\" rel=\"nofollow noreferrer\">Sine integral</a>.</p>\n" }, { "answer_id": 2111309, "score": 3, "text": "<p>Whether the anti-derivative of a function can be expressed in terms of elementary functions is solved by <a href=\"https://en.wikipedia.org/wiki/Risch_algorithm\" rel=\"nofollow noreferrer\">Risch algorithm</a>. 
$(\\sin x)/x$ is one of those functions without an elementary anti-derivative.</p>\n\n<p>With that being said, $(\\sin x)/x$ still has an anti-derivative, namely\n$$ g(x) = \\int_0^x \\frac{\\sin t}{t} \\mathrm dt\n$$\nThen $g'(x) = (\\sin x)/x$.</p>\n\n<p>In some sense $(\\sin x)/x$ is not integrable, since\n$$ \\int_{-\\infty}^\\infty \\left| \\frac{\\sin x}{x} \\right| \\mathrm dx = +\\infty\n$$\nso the Lebesgue integral of $(\\sin x)/x$ does not exist.</p>\n" },
Likewise, as others have mentioned, an antiderivative function $\\mathrm{Si}(x) = \\int_0^x\\frac{\\sin(t)}{t}dt$ exists and we have $$\\int_a^b\\frac{\\sin(x)}{x}dx = \\mathrm{Si}(b)-\\mathrm{Si}(a).$$ It's just that $\\mathrm{Si}(x)$ does not have a nice expression in terms of elementary functions like $\\sin$, $\\cos,$ $\\ln$, $e^x,$ etc like you would usually compute with integration techniques.... it's its 'own thing' (called the sine-integral function).</p>\n\n<p>Functions without an elementary antiderivative are usually called 'functions without an elementary antiderivative' and not called 'not integrable', although the latter would be perfectly reasonable terminology if it weren't already reserved for the technical condition I mentioned above. </p>\n" }, { "answer_id": 2111306, "score": 5, "text": "<p>$\\frac{\\sin x}{x}$ is integrable (since it is continuous), it is just that its indefinite integral cannot be expressed as an <a href=\"http://mathworld.wolfram.com/ElementaryFunction.html\" rel=\"nofollow noreferrer\">elementary function</a>. So its integral defines a special function <a href=\"http://mathworld.wolfram.com/SineIntegral.html\" rel=\"nofollow noreferrer\">Sine integral</a>.</p>\n" }, { "answer_id": 2111309, "score": 3, "text": "<p>Whether the anti-derivative of a function can be expressed in terms of elementary functions is solved by <a href=\"https://en.wikipedia.org/wiki/Risch_algorithm\" rel=\"nofollow noreferrer\">Risch algorithm</a>. $(\\sin x)/x$ is one of those functions without a elementary anti-derivative.</p>\n\n<p>With that being said, $(\\sin x)/x$ still has an anti-derivative, namely\n$$ g(x) = \\int_0^x \\frac{\\sin t}{t} \\mathrm dt\n$$\nThen $g'(x) = (\\sin x)/x$.</p>\n\n<p>In some sense $(\\sin x)/x$ is not integrable, since\n$$ \\int_{-\\infty}^\\infty \\left| \\frac{\\sin x}{x} \\right| \\mathrm dx = +\\infty\n$$\nso the Lebesgue integral of $(\\sin x)/x$ does not exist.</p>\n" }, { "answer_id": 2111336, "score": 16, "text": "<p>\"Integrability\" (more specifically, Lebesgue integrability) is a technical condition similar to absolute convergence of a sequence where a function is only considered integrable on $(a,b)$ if $$ \\int_a^b|f(x)|dx &lt; \\infty.$$ By comparison with $1/x,$ $\\sin(x)/x$ fails this criterion on $(-\\infty, \\infty)$ and is thus considered a non-integrable function on $(-\\infty, \\infty).$ This is likely the sense that was meant when you read that $\\sin(x)/x$ is not integrable.</p>\n\n<p>However, just like series can be convergent but not absolutely convergent, it turns out that since $\\sin(x)/x$ oscillates as $|x|\\rightarrow\\infty$, the 'area under the curve' cancels out over the oscillations and is finite. The integral $$ \\int_{-\\infty}^\\infty \\frac{\\sin(x)}{x}dx$$ can be defined as an improper Riemann integral and happens to equal $\\pi.$ This is just a warning that 'not integrable' (in the Lebesgue sense) doesn't mean that the definite integral doesn't have a value.</p>\n\n<p>All of this has little to do with your concern of finding an antiderivative for $\\sin(x)/x$ and thus computing the integral over an interval using the fundamental theorem of calculus. 
As others have mentioned, it turns out that the antiderivative cannot be expressed as an elementary function, so it's no surprise you haven't been successful in applying various integration tricks.</p>\n\n<p>Even though you cannot compute the integral in the usual way by taking an antiderivative, remember that, since $\\sin(x)/x$ is a nice bounded, continuous function, the definite integral $\\int_a^b\\frac{\\sin(x)}{x}dx$ exists a well-defined number (area under the curve) for any $a$ and $b$. Likewise, as others have mentioned, an antiderivative function $\\mathrm{Si}(x) = \\int_0^x\\frac{\\sin(t)}{t}dt$ exists and we have $$\\int_a^b\\frac{\\sin(x)}{x}dx = \\mathrm{Si}(b)-\\mathrm{Si}(a).$$ It's just that $\\mathrm{Si}(x)$ does not have a nice expression in terms of elementary functions like $\\sin$, $\\cos,$ $\\ln$, $e^x,$ etc like you would usually compute with integration techniques.... it's its 'own thing' (called the sine-integral function).</p>\n\n<p>Functions without an elementary antiderivative are usually called 'functions without an elementary antiderivative' and not called 'not integrable', although the latter would be perfectly reasonable terminology if it weren't already reserved for the technical condition I mentioned above. </p>\n" }, { "answer_id": 2111306, "score": 5, "text": "<p>$\\frac{\\sin x}{x}$ is integrable (since it is continuous), it is just that its indefinite integral cannot be expressed as an <a href=\"http://mathworld.wolfram.com/ElementaryFunction.html\" rel=\"nofollow noreferrer\">elementary function</a>. So its integral defines a special function <a href=\"http://mathworld.wolfram.com/SineIntegral.html\" rel=\"nofollow noreferrer\">Sine integral</a>.</p>\n" }, { "answer_id": 2111309, "score": 3, "text": "<p>Whether the anti-derivative of a function can be expressed in terms of elementary functions is solved by <a href=\"https://en.wikipedia.org/wiki/Risch_algorithm\" rel=\"nofollow noreferrer\">Risch algorithm</a>. $(\\sin x)/x$ is one of those functions without a elementary anti-derivative.</p>\n\n<p>With that being said, $(\\sin x)/x$ still has an anti-derivative, namely\n$$ g(x) = \\int_0^x \\frac{\\sin t}{t} \\mathrm dt\n$$\nThen $g'(x) = (\\sin x)/x$.</p>\n\n<p>In some sense $(\\sin x)/x$ is not integrable, since\n$$ \\int_{-\\infty}^\\infty \\left| \\frac{\\sin x}{x} \\right| \\mathrm dx = +\\infty\n$$\nso the Lebesgue integral of $(\\sin x)/x$ does not exist.</p>\n" }, { "answer_id": 2111336, "score": 16, "text": "<p>\"Integrability\" (more specifically, Lebesgue integrability) is a technical condition similar to absolute convergence of a sequence where a function is only considered integrable on $(a,b)$ if $$ \\int_a^b|f(x)|dx &lt; \\infty.$$ By comparison with $1/x,$ $\\sin(x)/x$ fails this criterion on $(-\\infty, \\infty)$ and is thus considered a non-integrable function on $(-\\infty, \\infty).$ This is likely the sense that was meant when you read that $\\sin(x)/x$ is not integrable.</p>\n\n<p>However, just like series can be convergent but not absolutely convergent, it turns out that since $\\sin(x)/x$ oscillates as $|x|\\rightarrow\\infty$, the 'area under the curve' cancels out over the oscillations and is finite. 
The integral $$ \\int_{-\\infty}^\\infty \\frac{\\sin(x)}{x}dx$$ can be defined as an improper Riemann integral and happens to equal $\\pi.$ This is just a warning that 'not integrable' (in the Lebesgue sense) doesn't mean that the definite integral doesn't have a value.</p>\n\n<p>All of this has little to do with your concern of finding an antiderivative for $\\sin(x)/x$ and thus computing the integral over an interval using the fundamental theorem of calculus. As others have mentioned, it turns out that the antiderivative cannot be expressed as an elementary function, so it's no surprise you haven't been successful in applying various integration tricks.</p>\n\n<p>Even though you cannot compute the integral in the usual way by taking an antiderivative, remember that, since $\\sin(x)/x$ is a nice bounded, continuous function, the definite integral $\\int_a^b\\frac{\\sin(x)}{x}dx$ exists a well-defined number (area under the curve) for any $a$ and $b$. Likewise, as others have mentioned, an antiderivative function $\\mathrm{Si}(x) = \\int_0^x\\frac{\\sin(t)}{t}dt$ exists and we have $$\\int_a^b\\frac{\\sin(x)}{x}dx = \\mathrm{Si}(b)-\\mathrm{Si}(a).$$ It's just that $\\mathrm{Si}(x)$ does not have a nice expression in terms of elementary functions like $\\sin$, $\\cos,$ $\\ln$, $e^x,$ etc like you would usually compute with integration techniques.... it's its 'own thing' (called the sine-integral function).</p>\n\n<p>Functions without an elementary antiderivative are usually called 'functions without an elementary antiderivative' and not called 'not integrable', although the latter would be perfectly reasonable terminology if it weren't already reserved for the technical condition I mentioned above. </p>\n" }, { "answer_id": 2111306, "score": 5, "text": "<p>$\\frac{\\sin x}{x}$ is integrable (since it is continuous), it is just that its indefinite integral cannot be expressed as an <a href=\"http://mathworld.wolfram.com/ElementaryFunction.html\" rel=\"nofollow noreferrer\">elementary function</a>. So its integral defines a special function <a href=\"http://mathworld.wolfram.com/SineIntegral.html\" rel=\"nofollow noreferrer\">Sine integral</a>.</p>\n" }, { "answer_id": 2111309, "score": 3, "text": "<p>Whether the anti-derivative of a function can be expressed in terms of elementary functions is solved by <a href=\"https://en.wikipedia.org/wiki/Risch_algorithm\" rel=\"nofollow noreferrer\">Risch algorithm</a>. $(\\sin x)/x$ is one of those functions without a elementary anti-derivative.</p>\n\n<p>With that being said, $(\\sin x)/x$ still has an anti-derivative, namely\n$$ g(x) = \\int_0^x \\frac{\\sin t}{t} \\mathrm dt\n$$\nThen $g'(x) = (\\sin x)/x$.</p>\n\n<p>In some sense $(\\sin x)/x$ is not integrable, since\n$$ \\int_{-\\infty}^\\infty \\left| \\frac{\\sin x}{x} \\right| \\mathrm dx = +\\infty\n$$\nso the Lebesgue integral of $(\\sin x)/x$ does not exist.</p>\n" } ]
Question: What exactly is a Non Integrable function? How can they be solved? Answer 1: <p>"Integrability" (more specifically, Lebesgue integrability) is a technical condition similar to absolute convergence of a series, where a function is only considered integrable on $(a,b)$ if $$ \int_a^b|f(x)|dx &lt; \infty.$$ By comparison with $1/x,$ $\sin(x)/x$ fails this criterion on $(-\infty, \infty)$ and is thus considered a non-integrable function on $(-\infty, \infty).$ This is likely the sense that was meant when you read that $\sin(x)/x$ is not integrable.</p> <p>However, just like series can be convergent but not absolutely convergent, it turns out that since $\sin(x)/x$ oscillates as $|x|\rightarrow\infty$, the 'area under the curve' cancels out over the oscillations and is finite. The integral $$ \int_{-\infty}^\infty \frac{\sin(x)}{x}dx$$ can be defined as an improper Riemann integral and happens to equal $\pi.$ This is just a warning that 'not integrable' (in the Lebesgue sense) doesn't mean that the definite integral doesn't have a value.</p> <p>All of this has little to do with your concern of finding an antiderivative for $\sin(x)/x$ and thus computing the integral over an interval using the fundamental theorem of calculus. As others have mentioned, it turns out that the antiderivative cannot be expressed as an elementary function, so it's no surprise you haven't been successful in applying various integration tricks.</p> <p>Even though you cannot compute the integral in the usual way by taking an antiderivative, remember that, since $\sin(x)/x$ is a nice bounded, continuous function, the definite integral $\int_a^b\frac{\sin(x)}{x}dx$ exists as a well-defined number (area under the curve) for any $a$ and $b$. Likewise, as others have mentioned, an antiderivative function $\mathrm{Si}(x) = \int_0^x\frac{\sin(t)}{t}dt$ exists and we have $$\int_a^b\frac{\sin(x)}{x}dx = \mathrm{Si}(b)-\mathrm{Si}(a).$$ It's just that $\mathrm{Si}(x)$ does not have a nice expression in terms of elementary functions like $\sin$, $\cos,$ $\ln$, $e^x,$ etc., like you would usually obtain with integration techniques... it's its 'own thing' (called the sine-integral function).</p> <p>Functions without an elementary antiderivative are usually called 'functions without an elementary antiderivative' and not called 'not integrable', although the latter would be perfectly reasonable terminology if it weren't already reserved for the technical condition I mentioned above. </p> Answer 2: <p>$\frac{\sin x}{x}$ is integrable (since it is continuous); it is just that its indefinite integral cannot be expressed as an <a href="http://mathworld.wolfram.com/ElementaryFunction.html" rel="nofollow noreferrer">elementary function</a>. So its integral defines a special function, the <a href="http://mathworld.wolfram.com/SineIntegral.html" rel="nofollow noreferrer">Sine integral</a>.</p> Answer 3: <p>Whether the anti-derivative of a function can be expressed in terms of elementary functions is decided by the <a href="https://en.wikipedia.org/wiki/Risch_algorithm" rel="nofollow noreferrer">Risch algorithm</a>. 
$(\sin x)/x$ is one of those functions without an elementary anti-derivative.</p> <p>With that being said, $(\sin x)/x$ still has an anti-derivative, namely $$ g(x) = \int_0^x \frac{\sin t}{t} \mathrm dt $$ Then $g'(x) = (\sin x)/x$.</p> <p>In some sense $(\sin x)/x$ is not integrable, since $$ \int_{-\infty}^\infty \left| \frac{\sin x}{x} \right| \mathrm dx = +\infty $$ so the Lebesgue integral of $(\sin x)/x$ does not exist.</p>
Find the inverse of a lower triangular matrix of ones
2,111,562
7
https://math.stackexchange.com/questions/2111562/find-the-inverse-of-a-lower-triangular-matrix-of-ones
<p>Find the inverse of the matrix $A=(a_{ij})\in M_n$ where $$ \begin{cases} a_{ij}=1, &amp;i\geq j,\\ a_{ij}=0, &amp;i&lt;j. \end{cases} $$</p> <p>The only method for finding inverses that I was taught was by finding the adjugate matrix. So $A^{-1}=\frac{1}{\det A}\operatorname{adj}(A)$.</p> <p>$$A=\begin{pmatrix} 1 &amp; 0 &amp;0 &amp;\ldots &amp;0\\ 1 &amp; 1 &amp; 0 &amp;\ldots &amp;0\\ 1 &amp; 1 &amp; 1 &amp;\ldots &amp;0\\ \vdots &amp;\vdots &amp; \vdots &amp;\ddots &amp; \vdots\\ 1 &amp; 1 &amp;1 &amp; \ldots &amp;1 \end{pmatrix}$$</p> <p>This is a triangular matrix so $\det A=1.$ To find the adjugate I first need to find the cofactor matrix $(C)$ of $A$.</p> <p>$$C=\begin{pmatrix} 1&amp;-1&amp;0&amp;0&amp;\ldots &amp;0\\ 0 &amp; 1&amp; -1 &amp;0&amp;\ldots &amp; 0\\ 0 &amp; 0 &amp; 1 &amp; -1&amp;\ldots &amp;0\\ 0 &amp; 0 &amp; 0 &amp;1 &amp;\ldots &amp;0\\ \vdots &amp;\vdots &amp;\vdots &amp;\vdots&amp;\ddots &amp; \vdots\\ 0 &amp; 0 &amp;0 &amp;0 &amp;\ldots &amp;1\end{pmatrix}$$</p> <p>$$C^T=\operatorname{adj}(A)=\begin{pmatrix} 1 &amp; 0 &amp; 0 &amp; 0&amp;\ldots &amp;0\\ -1 &amp; 1&amp; 0 &amp; 0 &amp;\ldots &amp;0\\ 0&amp;-1&amp;1&amp;0&amp;\ldots &amp;0\\ 0&amp; 0 &amp;-1 &amp;1 &amp;\ldots&amp;0\\ \vdots &amp;\vdots &amp;\vdots &amp;\vdots &amp;\ddots&amp;\vdots\\ 0&amp;0&amp;0&amp;0&amp;\ldots &amp;1\end{pmatrix}=A^{-1}$$</p> <p>Is this correct? Also, can I leave it like that or should I somehow write it more formally?</p>
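(Editorial aside, not from the thread:) since the question ends with "Is this correct?", a quick numerical check is easy to run. The sketch below builds $A$ for an arbitrary size, inverts it with numpy, and compares against the proposed bidiagonal matrix; `n = 6` is an arbitrary choice.

```python
import numpy as np

n = 6  # arbitrary test size
A = np.tril(np.ones((n, n)))       # lower triangular matrix of ones
B = np.eye(n) - np.eye(n, k=-1)    # proposed inverse: 1 on diagonal, -1 on first subdiagonal

print(np.allclose(np.linalg.inv(A), B))  # True: the adjugate computation is correct
print(np.allclose(A @ B, np.eye(n)))     # True: A B = I
```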
[ { "answer_id": 2111901, "score": 6, "text": "<p>I would like to present a very simple solution by interpretation of these matrices as operators on $\\mathbb{R^n}$ (which will surprise nobody...). Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p>\n\n<p>For any $x_1,x_2,x_3,\\cdots x_n$:</p>\n\n<p>$$\\tag{1}A (x_1,x_2,x_3,\\cdots x_n)^T=(s_1,s_2,s_3,\\cdots s_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\\\s_3&amp;=&amp;x_1+x_2+x_3\\\\...\\end{cases}$$</p>\n\n<p>(1) is equivalent to:</p>\n\n<p>$$\\tag{2}A^{-1} (s_1,s_2,s_3,\\cdots s_n)^T=(x_1,x_2,x_3,\\cdots x_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}x_1&amp;=&amp; \\ \\ s_1&amp;&amp;&amp;&amp;\\\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\\\...\\end{cases}$$</p>\n\n<p>and it suffices now to \"collect the coefficients\" in the right order to constitute the inverse matrix.</p>\n\n<p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>.)</p>\n" }, { "answer_id": 2111733, "score": 4, "text": "<p>Yes, your inverse is correct. Let me show you an alternative approach to finding this inverse without using the adjugate matrix.\n<hr>\n Denote by $A_n$ the $n \\times n$ matrix that has elements $a_{ij} = 1$ if $i \\geq j$, and $0$ otherwise. Then $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \\ldots, R_n + R_{n-1}$$ (in that order) to the identity matrix $I_n$. So, $$A_n = E_{n-1} \\cdots E_2 E_1,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} + R_i$. Since $\\det(A_n) = 1$ is nonzero, $A_n$ is invertible, hence $$B_n = \\left( A_n \\right)^{-1} = {E_1}^{-1} {E_2}^{-1} \\cdots {E_{n-1}}^{-1},$$ where ${E_i}^{-1}$ represents the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\\ldots,R_2 - R_1$$ (in that order) to the identity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br>\n$$B_n = \\begin{pmatrix}\n\\phantom{-}1 &amp; \\phantom{-}0 &amp;\\phantom{-}0 &amp;\\ldots &amp;0\\\\\n-1 &amp; \\phantom{-}1 &amp; \\phantom{-}0 &amp;\\ldots &amp;0\\\\\n\\phantom{-}0 &amp; -1 &amp; \\phantom{-}1 &amp;\\ldots &amp;0\\\\\n\\phantom{-}\\vdots &amp;\\phantom{-}\\vdots &amp; \\phantom{-}\\vdots &amp;\\ddots &amp; \\vdots\\\\\n\\phantom{-}0 &amp; \\phantom{-}0 &amp; \\phantom{-}0 &amp; \\cdots &amp;1 \\end{pmatrix}.$$\nTo check that this is the correct inverse, you can verify the identity $$A_nB_n = I_n.$$</p>\n" }, { "answer_id": 2111834, "score": 3, "text": "<p>You can also use induction along the dimension $n$ (leading to a simple but not very constructive proof, though). </p>\n\n<p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$, where it reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ are the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form\n$$\nA_n=\\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\\\1_{n-1}^T&amp;1\\end{bmatrix}.\n$$\nYou can easily verify that\n$$\nA_n^{-1}=\\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\\end{bmatrix}\n$$\nis the inverse of $A_n$.\nBy the induction assumption, $A_{n-1}^{-1}=B_{n-1}$.
It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\\ldots,0,1]$ (the sum of the rows of $B_{n-1}$). So\n$$\nA_n^{-1}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-1_{n-1}^TB_{n-1}&amp;1\\end{bmatrix}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-e_{n-1}^T&amp;1\\end{bmatrix}=B_n.\n$$</p>\n" }, { "answer_id": 2111993, "score": 1, "text": "<p>Just to show a different approach.<br>\nConsider the matrix $\\mathbf E$, having $1$ only on the first subdiagonal:\n$$\n\\mathbf{E} = \\left\\| {\\,e_{\\,n,\\,m} = \\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 1} \\\\\n 0 &amp; {n \\ne m + 1} \\\\\n \\end{array} } \\right.\\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 1 \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\n$$\nMultiply it by $\\mathbf A$, and it is easily seen that $$\\mathbf E \\, \\mathbf A=\\mathbf A-\\mathbf I \\quad \\Rightarrow \\quad \\left( \\mathbf I -\\mathbf E \\right)\\,\\mathbf A=\\mathbf I$$</p>\n\n<p>In another way, consider that the powers of $\\mathbf E$ are readily found and have a simple formulation\n$$\n\\begin{gathered}\n \\mathbf{E}^{\\,\\mathbf{2}} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - 1 \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 2 \\\\ \n\\end{gathered} \\right)\\;} \\right\\| = \\left\\| {\\,\\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 2} \\\\\n 0 &amp; {n \\ne m + 2} \\\\\n \\end{array} } \\right.\\;} \\right\\| \\hfill \\\\\n \\quad \\vdots \\hfill \\\\\n \\mathbf{E}^{\\,\\mathbf{q}} = \\mathbf{E}^{\\,\\mathbf{q} - \\mathbf{1}} \\,\\mathbf{E} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - \\left( {q - 1} \\right) \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\hfill \\\\\n = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - q \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\\quad \\left| {\\;0 \\leqslant \\text{integer}\\;q} \\right. \\hfill \\\\ \n\\end{gathered} \n$$\nSince $\\mathbf E$ is nilpotent, the geometric series below is in fact a finite sum, and therefore\n$$\n\\mathbf{A} = \\sum\\limits_{0\\, \\leqslant \\,j} {\\mathbf{E}^{\\,\\mathbf{j}} } = \\frac{\\mathbf{I}}\n{{\\mathbf{I} - \\mathbf{E}}}\n$$\nTo connect to the answer of <em>Jean Marie</em>, note that \n$$\n\\left( {\\mathbf{I} - \\mathbf{E}} \\right)\\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} } \\\\\n {x_{\\,1} } \\\\\n \\vdots \\\\\n {x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} - 0} \\\\\n {x_{\\,1} - x_{\\,0} } \\\\\n \\vdots \\\\\n {x_{\\,n} - x_{\\,n - 1} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {\\nabla x_{\\,0} \\;\\left| {x_{\\, - 1} = 0} \\right.} \\\\\n {\\nabla x_{\\,1} } \\\\\n \\vdots \\\\\n {\\nabla x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\|\n$$</p>\n" } ]
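To make the "discrete integration / discrete derivation" picture from the first answer concrete, here is a small numpy sketch (an illustration added here, not part of any answer): multiplying by $A$ is a cumulative sum, and multiplying by $B = A^{-1}$ is a first difference.

```python
import numpy as np

n = 5
A = np.tril(np.ones((n, n)))
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])  # arbitrary sample vector

s = A @ x
print(np.allclose(s, np.cumsum(x)))                 # True: A is a running-sum operator

B = np.eye(n) - np.eye(n, k=-1)
print(np.allclose(B @ s, np.diff(s, prepend=0.0)))  # True: A^{-1} takes first differences
print(np.allclose(B @ s, x))                        # True: the original vector is recovered
```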
Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p>\n\n<p>For any $x_1,x_2,x_3,\\cdots x_n$:</p>\n\n<p>$$\\tag{1}A (x_1,x_2,x_3,\\cdots x_n)^T=(s_1,s_2,s_3,\\cdots s_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\\\s_3&amp;=&amp;x_1+x_2+x_3\\\\...\\end{cases}$$</p>\n\n<p>(1) is equivalent to:</p>\n\n<p>$$\\tag{2}A^{-1} (s_1,s_2,s_3,\\cdots x_n)^T=(x_1,x_2,x_3,\\cdots x_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}x_1&amp;=&amp; \\ \\ s_1&amp;&amp;&amp;&amp;\\\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\\\...\\end{cases}$$</p>\n\n<p>and it suffices now to \"collect the coefficients\" in the right order in order to constitute the inverse matrix.</p>\n\n<p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p>\n" }, { "answer_id": 2111733, "score": 4, "text": "<p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix.\n<hr>\n Denote $A_n$ as the $n \\times n$ matrix that has elements $a_{ij} = 1$ if $i \\geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \\ldots, R_n - R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \\prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} - R_i$. Since det$(A)$ is nonzero, it is invertible, hence $$B_n = \\left( A_n \\right)^{-1} = \\prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\\ldots,R_2 - R_1$$ to the idenity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br>\n$$B_n = \\begin{pmatrix}\n\\phantom{-}1 &amp; \\phantom{-}0 &amp;\\phantom{-}0 &amp;\\ldots &amp;0\\\\\n-1 &amp; \\phantom{-}1 &amp; \\phantom{-}0 &amp;\\ldots &amp;0\\\\\n\\phantom{-}0 &amp; -1 &amp; \\phantom{-}1 &amp;\\ldots &amp;0\\\\\n\\phantom{-}\\vdots &amp;\\phantom{-}\\vdots &amp; \\phantom{-}\\vdots &amp;\\ddots &amp; \\vdots\\\\\n\\phantom{-}0 &amp; \\phantom{-}0 &amp; \\phantom{-}0 &amp; \\cdots &amp;1 \\end{pmatrix}.$$\nIn order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p>\n" }, { "answer_id": 2111834, "score": 3, "text": "<p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p>\n\n<p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form\n$$\nA_n=\\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\\\1_{n-1}^T&amp;1\\end{bmatrix}.\n$$\nYou can easily verify that\n$$\nA_n^{-1}=\\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\\end{bmatrix}\n$$\nis the inverse of $A_n$.\nBy the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\\ldots,0,1]^T$ (sum of the rows of $B_{n-1}$). 
So\n$$\nA_n^{-1}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-1_{n-1}^TB_{n-1}&amp;1\\end{bmatrix}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-e_{n-1}^T&amp;1\\end{bmatrix}=B_n.\n$$</p>\n" }, { "answer_id": 2111993, "score": 1, "text": "<p>Just to show a different approach.<br>\nConsider the matrix $\\mathbf E$, having $1$ only on the first subdiagonal\n$$\n\\mathbf{E} = \\left\\| {\\,e_{\\,n,\\,m} = \\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 1} \\\\\n 0 &amp; {n \\ne m + 1} \\\\\n \\end{array} } \\right.\\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 1 \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\n$$\nMultiply it by $\\mathbf A$ , and it is easily seen that $$\\mathbf E \\, \\mathbf A=\\mathbf A-\\mathbf I \\quad \\Rightarrow \\quad \\left( \\mathbf I -\\mathbf E \\right)\\,\\mathbf A=\\mathbf I$$</p>\n\n<p>In another way, consider that the powers of $\\mathbf E$ are readily found and have a simple formulation\n$$\n\\begin{gathered}\n \\mathbf{E}^{\\,\\mathbf{2}} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - 1 \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 2 \\\\ \n\\end{gathered} \\right)\\;} \\right\\| = \\left\\| {\\,\\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 2} \\\\\n 0 &amp; {n \\ne m + 2} \\\\\n \\end{array} } \\right.\\;} \\right\\| \\hfill \\\\\n \\quad \\vdots \\hfill \\\\\n \\mathbf{E}^{\\,\\mathbf{q}} = \\mathbf{E}^{\\,\\mathbf{q} - \\mathbf{1}} \\,\\mathbf{E} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - \\left( {q - 1} \\right) \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\hfill \\\\\n = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - q \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\\quad \\left| {\\;0 \\leqslant \\text{integer}\\;q} \\right. \\hfill \\\\ \n\\end{gathered} \n$$\nTherefore\n$$\n\\mathbf{A} = \\sum\\limits_{0\\, \\leqslant \\,j} {\\mathbf{E}^{\\,\\mathbf{j}} } = \\frac{\\mathbf{I}}\n{{\\mathbf{I} - \\mathbf{E}}}\n$$\nTo connect to the answer of <em>Jean Marie</em>, note that \n$$\n\\left( {\\mathbf{I} - \\mathbf{E}} \\right)\\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} } \\\\\n {x_{\\,1} } \\\\\n \\vdots \\\\\n {x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} ( - 0)} \\\\\n {x_{\\,1} - x_{\\,0} } \\\\\n \\vdots \\\\\n {x_{\\,n} - x_{\\,n - 1} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {\\nabla x_{\\,0} \\;\\left| {x_{\\, - 1} = 0} \\right.} \\\\\n {\\nabla x_{\\,1} } \\\\\n \\vdots \\\\\n {\\nabla x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\|\n$$</p>\n" }, { "answer_id": 2111901, "score": 6, "text": "<p>I would like to present a very simple solution by interpretation of these matrices as operators on $\\mathbb{R^n}$ (which will surprise nobody...). 
Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p>\n\n<p>For any $x_1,x_2,x_3,\\cdots x_n$:</p>\n\n<p>$$\\tag{1}A (x_1,x_2,x_3,\\cdots x_n)^T=(s_1,s_2,s_3,\\cdots s_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\\\s_3&amp;=&amp;x_1+x_2+x_3\\\\...\\end{cases}$$</p>\n\n<p>(1) is equivalent to:</p>\n\n<p>$$\\tag{2}A^{-1} (s_1,s_2,s_3,\\cdots x_n)^T=(x_1,x_2,x_3,\\cdots x_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}x_1&amp;=&amp; \\ \\ s_1&amp;&amp;&amp;&amp;\\\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\\\...\\end{cases}$$</p>\n\n<p>and it suffices now to \"collect the coefficients\" in the right order in order to constitute the inverse matrix.</p>\n\n<p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p>\n" }, { "answer_id": 2111733, "score": 4, "text": "<p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix.\n<hr>\n Denote $A_n$ as the $n \\times n$ matrix that has elements $a_{ij} = 1$ if $i \\geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \\ldots, R_n - R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \\prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} - R_i$. Since det$(A)$ is nonzero, it is invertible, hence $$B_n = \\left( A_n \\right)^{-1} = \\prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\\ldots,R_2 - R_1$$ to the idenity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br>\n$$B_n = \\begin{pmatrix}\n\\phantom{-}1 &amp; \\phantom{-}0 &amp;\\phantom{-}0 &amp;\\ldots &amp;0\\\\\n-1 &amp; \\phantom{-}1 &amp; \\phantom{-}0 &amp;\\ldots &amp;0\\\\\n\\phantom{-}0 &amp; -1 &amp; \\phantom{-}1 &amp;\\ldots &amp;0\\\\\n\\phantom{-}\\vdots &amp;\\phantom{-}\\vdots &amp; \\phantom{-}\\vdots &amp;\\ddots &amp; \\vdots\\\\\n\\phantom{-}0 &amp; \\phantom{-}0 &amp; \\phantom{-}0 &amp; \\cdots &amp;1 \\end{pmatrix}.$$\nIn order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p>\n" }, { "answer_id": 2111834, "score": 3, "text": "<p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p>\n\n<p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form\n$$\nA_n=\\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\\\1_{n-1}^T&amp;1\\end{bmatrix}.\n$$\nYou can easily verify that\n$$\nA_n^{-1}=\\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\\end{bmatrix}\n$$\nis the inverse of $A_n$.\nBy the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\\ldots,0,1]^T$ (sum of the rows of $B_{n-1}$). 
So\n$$\nA_n^{-1}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-1_{n-1}^TB_{n-1}&amp;1\\end{bmatrix}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-e_{n-1}^T&amp;1\\end{bmatrix}=B_n.\n$$</p>\n" }, { "answer_id": 2111993, "score": 1, "text": "<p>Just to show a different approach.<br>\nConsider the matrix $\\mathbf E$, having $1$ only on the first subdiagonal\n$$\n\\mathbf{E} = \\left\\| {\\,e_{\\,n,\\,m} = \\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 1} \\\\\n 0 &amp; {n \\ne m + 1} \\\\\n \\end{array} } \\right.\\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 1 \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\n$$\nMultiply it by $\\mathbf A$ , and it is easily seen that $$\\mathbf E \\, \\mathbf A=\\mathbf A-\\mathbf I \\quad \\Rightarrow \\quad \\left( \\mathbf I -\\mathbf E \\right)\\,\\mathbf A=\\mathbf I$$</p>\n\n<p>In another way, consider that the powers of $\\mathbf E$ are readily found and have a simple formulation\n$$\n\\begin{gathered}\n \\mathbf{E}^{\\,\\mathbf{2}} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - 1 \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 2 \\\\ \n\\end{gathered} \\right)\\;} \\right\\| = \\left\\| {\\,\\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 2} \\\\\n 0 &amp; {n \\ne m + 2} \\\\\n \\end{array} } \\right.\\;} \\right\\| \\hfill \\\\\n \\quad \\vdots \\hfill \\\\\n \\mathbf{E}^{\\,\\mathbf{q}} = \\mathbf{E}^{\\,\\mathbf{q} - \\mathbf{1}} \\,\\mathbf{E} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - \\left( {q - 1} \\right) \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\hfill \\\\\n = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - q \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\\quad \\left| {\\;0 \\leqslant \\text{integer}\\;q} \\right. \\hfill \\\\ \n\\end{gathered} \n$$\nTherefore\n$$\n\\mathbf{A} = \\sum\\limits_{0\\, \\leqslant \\,j} {\\mathbf{E}^{\\,\\mathbf{j}} } = \\frac{\\mathbf{I}}\n{{\\mathbf{I} - \\mathbf{E}}}\n$$\nTo connect to the answer of <em>Jean Marie</em>, note that \n$$\n\\left( {\\mathbf{I} - \\mathbf{E}} \\right)\\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} } \\\\\n {x_{\\,1} } \\\\\n \\vdots \\\\\n {x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} ( - 0)} \\\\\n {x_{\\,1} - x_{\\,0} } \\\\\n \\vdots \\\\\n {x_{\\,n} - x_{\\,n - 1} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {\\nabla x_{\\,0} \\;\\left| {x_{\\, - 1} = 0} \\right.} \\\\\n {\\nabla x_{\\,1} } \\\\\n \\vdots \\\\\n {\\nabla x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\|\n$$</p>\n" }, { "answer_id": 2111901, "score": 6, "text": "<p>I would like to present a very simple solution by interpretation of these matrices as operators on $\\mathbb{R^n}$ (which will surprise nobody...). 
Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p>\n\n<p>For any $x_1,x_2,x_3,\\cdots x_n$:</p>\n\n<p>$$\\tag{1}A (x_1,x_2,x_3,\\cdots x_n)^T=(s_1,s_2,s_3,\\cdots s_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\\\s_3&amp;=&amp;x_1+x_2+x_3\\\\...\\end{cases}$$</p>\n\n<p>(1) is equivalent to:</p>\n\n<p>$$\\tag{2}A^{-1} (s_1,s_2,s_3,\\cdots x_n)^T=(x_1,x_2,x_3,\\cdots x_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}x_1&amp;=&amp; \\ \\ s_1&amp;&amp;&amp;&amp;\\\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\\\...\\end{cases}$$</p>\n\n<p>and it suffices now to \"collect the coefficients\" in the right order in order to constitute the inverse matrix.</p>\n\n<p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p>\n" }, { "answer_id": 2111733, "score": 4, "text": "<p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix.\n<hr>\n Denote $A_n$ as the $n \\times n$ matrix that has elements $a_{ij} = 1$ if $i \\geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \\ldots, R_n - R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \\prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} - R_i$. Since det$(A)$ is nonzero, it is invertible, hence $$B_n = \\left( A_n \\right)^{-1} = \\prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\\ldots,R_2 - R_1$$ to the idenity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br>\n$$B_n = \\begin{pmatrix}\n\\phantom{-}1 &amp; \\phantom{-}0 &amp;\\phantom{-}0 &amp;\\ldots &amp;0\\\\\n-1 &amp; \\phantom{-}1 &amp; \\phantom{-}0 &amp;\\ldots &amp;0\\\\\n\\phantom{-}0 &amp; -1 &amp; \\phantom{-}1 &amp;\\ldots &amp;0\\\\\n\\phantom{-}\\vdots &amp;\\phantom{-}\\vdots &amp; \\phantom{-}\\vdots &amp;\\ddots &amp; \\vdots\\\\\n\\phantom{-}0 &amp; \\phantom{-}0 &amp; \\phantom{-}0 &amp; \\cdots &amp;1 \\end{pmatrix}.$$\nIn order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p>\n" }, { "answer_id": 2111834, "score": 3, "text": "<p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p>\n\n<p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form\n$$\nA_n=\\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\\\1_{n-1}^T&amp;1\\end{bmatrix}.\n$$\nYou can easily verify that\n$$\nA_n^{-1}=\\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\\end{bmatrix}\n$$\nis the inverse of $A_n$.\nBy the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\\ldots,0,1]^T$ (sum of the rows of $B_{n-1}$). 
So\n$$\nA_n^{-1}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-1_{n-1}^TB_{n-1}&amp;1\\end{bmatrix}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-e_{n-1}^T&amp;1\\end{bmatrix}=B_n.\n$$</p>\n" }, { "answer_id": 2111993, "score": 1, "text": "<p>Just to show a different approach.<br>\nConsider the matrix $\\mathbf E$, having $1$ only on the first subdiagonal\n$$\n\\mathbf{E} = \\left\\| {\\,e_{\\,n,\\,m} = \\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 1} \\\\\n 0 &amp; {n \\ne m + 1} \\\\\n \\end{array} } \\right.\\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 1 \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\n$$\nMultiply it by $\\mathbf A$ , and it is easily seen that $$\\mathbf E \\, \\mathbf A=\\mathbf A-\\mathbf I \\quad \\Rightarrow \\quad \\left( \\mathbf I -\\mathbf E \\right)\\,\\mathbf A=\\mathbf I$$</p>\n\n<p>In another way, consider that the powers of $\\mathbf E$ are readily found and have a simple formulation\n$$\n\\begin{gathered}\n \\mathbf{E}^{\\,\\mathbf{2}} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - 1 \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 2 \\\\ \n\\end{gathered} \\right)\\;} \\right\\| = \\left\\| {\\,\\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 2} \\\\\n 0 &amp; {n \\ne m + 2} \\\\\n \\end{array} } \\right.\\;} \\right\\| \\hfill \\\\\n \\quad \\vdots \\hfill \\\\\n \\mathbf{E}^{\\,\\mathbf{q}} = \\mathbf{E}^{\\,\\mathbf{q} - \\mathbf{1}} \\,\\mathbf{E} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - \\left( {q - 1} \\right) \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\hfill \\\\\n = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - q \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\\quad \\left| {\\;0 \\leqslant \\text{integer}\\;q} \\right. \\hfill \\\\ \n\\end{gathered} \n$$\nTherefore\n$$\n\\mathbf{A} = \\sum\\limits_{0\\, \\leqslant \\,j} {\\mathbf{E}^{\\,\\mathbf{j}} } = \\frac{\\mathbf{I}}\n{{\\mathbf{I} - \\mathbf{E}}}\n$$\nTo connect to the answer of <em>Jean Marie</em>, note that \n$$\n\\left( {\\mathbf{I} - \\mathbf{E}} \\right)\\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} } \\\\\n {x_{\\,1} } \\\\\n \\vdots \\\\\n {x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} ( - 0)} \\\\\n {x_{\\,1} - x_{\\,0} } \\\\\n \\vdots \\\\\n {x_{\\,n} - x_{\\,n - 1} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {\\nabla x_{\\,0} \\;\\left| {x_{\\, - 1} = 0} \\right.} \\\\\n {\\nabla x_{\\,1} } \\\\\n \\vdots \\\\\n {\\nabla x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\|\n$$</p>\n" }, { "answer_id": 2111901, "score": 6, "text": "<p>I would like to present a very simple solution by interpretation of these matrices as operators on $\\mathbb{R^n}$ (which will surprise nobody...). 
Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p>\n\n<p>For any $x_1,x_2,x_3,\\cdots x_n$:</p>\n\n<p>$$\\tag{1}A (x_1,x_2,x_3,\\cdots x_n)^T=(s_1,s_2,s_3,\\cdots s_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\\\s_3&amp;=&amp;x_1+x_2+x_3\\\\...\\end{cases}$$</p>\n\n<p>(1) is equivalent to:</p>\n\n<p>$$\\tag{2}A^{-1} (s_1,s_2,s_3,\\cdots x_n)^T=(x_1,x_2,x_3,\\cdots x_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}x_1&amp;=&amp; \\ \\ s_1&amp;&amp;&amp;&amp;\\\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\\\...\\end{cases}$$</p>\n\n<p>and it suffices now to \"collect the coefficients\" in the right order in order to constitute the inverse matrix.</p>\n\n<p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p>\n" }, { "answer_id": 2111733, "score": 4, "text": "<p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix.\n<hr>\n Denote $A_n$ as the $n \\times n$ matrix that has elements $a_{ij} = 1$ if $i \\geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \\ldots, R_n - R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \\prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} - R_i$. Since det$(A)$ is nonzero, it is invertible, hence $$B_n = \\left( A_n \\right)^{-1} = \\prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\\ldots,R_2 - R_1$$ to the idenity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br>\n$$B_n = \\begin{pmatrix}\n\\phantom{-}1 &amp; \\phantom{-}0 &amp;\\phantom{-}0 &amp;\\ldots &amp;0\\\\\n-1 &amp; \\phantom{-}1 &amp; \\phantom{-}0 &amp;\\ldots &amp;0\\\\\n\\phantom{-}0 &amp; -1 &amp; \\phantom{-}1 &amp;\\ldots &amp;0\\\\\n\\phantom{-}\\vdots &amp;\\phantom{-}\\vdots &amp; \\phantom{-}\\vdots &amp;\\ddots &amp; \\vdots\\\\\n\\phantom{-}0 &amp; \\phantom{-}0 &amp; \\phantom{-}0 &amp; \\cdots &amp;1 \\end{pmatrix}.$$\nIn order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p>\n" }, { "answer_id": 2111834, "score": 3, "text": "<p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p>\n\n<p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form\n$$\nA_n=\\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\\\1_{n-1}^T&amp;1\\end{bmatrix}.\n$$\nYou can easily verify that\n$$\nA_n^{-1}=\\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\\end{bmatrix}\n$$\nis the inverse of $A_n$.\nBy the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\\ldots,0,1]^T$ (sum of the rows of $B_{n-1}$). 
So\n$$\nA_n^{-1}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-1_{n-1}^TB_{n-1}&amp;1\\end{bmatrix}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-e_{n-1}^T&amp;1\\end{bmatrix}=B_n.\n$$</p>\n" }, { "answer_id": 2111993, "score": 1, "text": "<p>Just to show a different approach.<br>\nConsider the matrix $\\mathbf E$, having $1$ only on the first subdiagonal\n$$\n\\mathbf{E} = \\left\\| {\\,e_{\\,n,\\,m} = \\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 1} \\\\\n 0 &amp; {n \\ne m + 1} \\\\\n \\end{array} } \\right.\\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 1 \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\n$$\nMultiply it by $\\mathbf A$ , and it is easily seen that $$\\mathbf E \\, \\mathbf A=\\mathbf A-\\mathbf I \\quad \\Rightarrow \\quad \\left( \\mathbf I -\\mathbf E \\right)\\,\\mathbf A=\\mathbf I$$</p>\n\n<p>In another way, consider that the powers of $\\mathbf E$ are readily found and have a simple formulation\n$$\n\\begin{gathered}\n \\mathbf{E}^{\\,\\mathbf{2}} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - 1 \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 2 \\\\ \n\\end{gathered} \\right)\\;} \\right\\| = \\left\\| {\\,\\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 2} \\\\\n 0 &amp; {n \\ne m + 2} \\\\\n \\end{array} } \\right.\\;} \\right\\| \\hfill \\\\\n \\quad \\vdots \\hfill \\\\\n \\mathbf{E}^{\\,\\mathbf{q}} = \\mathbf{E}^{\\,\\mathbf{q} - \\mathbf{1}} \\,\\mathbf{E} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - \\left( {q - 1} \\right) \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\hfill \\\\\n = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - q \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\\quad \\left| {\\;0 \\leqslant \\text{integer}\\;q} \\right. \\hfill \\\\ \n\\end{gathered} \n$$\nTherefore\n$$\n\\mathbf{A} = \\sum\\limits_{0\\, \\leqslant \\,j} {\\mathbf{E}^{\\,\\mathbf{j}} } = \\frac{\\mathbf{I}}\n{{\\mathbf{I} - \\mathbf{E}}}\n$$\nTo connect to the answer of <em>Jean Marie</em>, note that \n$$\n\\left( {\\mathbf{I} - \\mathbf{E}} \\right)\\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} } \\\\\n {x_{\\,1} } \\\\\n \\vdots \\\\\n {x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} ( - 0)} \\\\\n {x_{\\,1} - x_{\\,0} } \\\\\n \\vdots \\\\\n {x_{\\,n} - x_{\\,n - 1} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {\\nabla x_{\\,0} \\;\\left| {x_{\\, - 1} = 0} \\right.} \\\\\n {\\nabla x_{\\,1} } \\\\\n \\vdots \\\\\n {\\nabla x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\|\n$$</p>\n" }, { "answer_id": 2111901, "score": 6, "text": "<p>I would like to present a very simple solution by interpretation of these matrices as operators on $\\mathbb{R^n}$ (which will surprise nobody...). 
Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p>\n\n<p>For any $x_1,x_2,x_3,\\cdots x_n$:</p>\n\n<p>$$\\tag{1}A (x_1,x_2,x_3,\\cdots x_n)^T=(s_1,s_2,s_3,\\cdots s_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\\\s_3&amp;=&amp;x_1+x_2+x_3\\\\...\\end{cases}$$</p>\n\n<p>(1) is equivalent to:</p>\n\n<p>$$\\tag{2}A^{-1} (s_1,s_2,s_3,\\cdots x_n)^T=(x_1,x_2,x_3,\\cdots x_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}x_1&amp;=&amp; \\ \\ s_1&amp;&amp;&amp;&amp;\\\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\\\...\\end{cases}$$</p>\n\n<p>and it suffices now to \"collect the coefficients\" in the right order in order to constitute the inverse matrix.</p>\n\n<p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p>\n" }, { "answer_id": 2111733, "score": 4, "text": "<p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix.\n<hr>\n Denote $A_n$ as the $n \\times n$ matrix that has elements $a_{ij} = 1$ if $i \\geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \\ldots, R_n - R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \\prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} - R_i$. Since det$(A)$ is nonzero, it is invertible, hence $$B_n = \\left( A_n \\right)^{-1} = \\prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\\ldots,R_2 - R_1$$ to the idenity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br>\n$$B_n = \\begin{pmatrix}\n\\phantom{-}1 &amp; \\phantom{-}0 &amp;\\phantom{-}0 &amp;\\ldots &amp;0\\\\\n-1 &amp; \\phantom{-}1 &amp; \\phantom{-}0 &amp;\\ldots &amp;0\\\\\n\\phantom{-}0 &amp; -1 &amp; \\phantom{-}1 &amp;\\ldots &amp;0\\\\\n\\phantom{-}\\vdots &amp;\\phantom{-}\\vdots &amp; \\phantom{-}\\vdots &amp;\\ddots &amp; \\vdots\\\\\n\\phantom{-}0 &amp; \\phantom{-}0 &amp; \\phantom{-}0 &amp; \\cdots &amp;1 \\end{pmatrix}.$$\nIn order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p>\n" }, { "answer_id": 2111834, "score": 3, "text": "<p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p>\n\n<p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form\n$$\nA_n=\\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\\\1_{n-1}^T&amp;1\\end{bmatrix}.\n$$\nYou can easily verify that\n$$\nA_n^{-1}=\\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\\end{bmatrix}\n$$\nis the inverse of $A_n$.\nBy the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\\ldots,0,1]^T$ (sum of the rows of $B_{n-1}$). 
So\n$$\nA_n^{-1}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-1_{n-1}^TB_{n-1}&amp;1\\end{bmatrix}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-e_{n-1}^T&amp;1\\end{bmatrix}=B_n.\n$$</p>\n" }, { "answer_id": 2111993, "score": 1, "text": "<p>Just to show a different approach.<br>\nConsider the matrix $\\mathbf E$, having $1$ only on the first subdiagonal\n$$\n\\mathbf{E} = \\left\\| {\\,e_{\\,n,\\,m} = \\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 1} \\\\\n 0 &amp; {n \\ne m + 1} \\\\\n \\end{array} } \\right.\\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 1 \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\n$$\nMultiply it by $\\mathbf A$ , and it is easily seen that $$\\mathbf E \\, \\mathbf A=\\mathbf A-\\mathbf I \\quad \\Rightarrow \\quad \\left( \\mathbf I -\\mathbf E \\right)\\,\\mathbf A=\\mathbf I$$</p>\n\n<p>In another way, consider that the powers of $\\mathbf E$ are readily found and have a simple formulation\n$$\n\\begin{gathered}\n \\mathbf{E}^{\\,\\mathbf{2}} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - 1 \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 2 \\\\ \n\\end{gathered} \\right)\\;} \\right\\| = \\left\\| {\\,\\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 2} \\\\\n 0 &amp; {n \\ne m + 2} \\\\\n \\end{array} } \\right.\\;} \\right\\| \\hfill \\\\\n \\quad \\vdots \\hfill \\\\\n \\mathbf{E}^{\\,\\mathbf{q}} = \\mathbf{E}^{\\,\\mathbf{q} - \\mathbf{1}} \\,\\mathbf{E} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - \\left( {q - 1} \\right) \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\hfill \\\\\n = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - q \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\\quad \\left| {\\;0 \\leqslant \\text{integer}\\;q} \\right. \\hfill \\\\ \n\\end{gathered} \n$$\nTherefore\n$$\n\\mathbf{A} = \\sum\\limits_{0\\, \\leqslant \\,j} {\\mathbf{E}^{\\,\\mathbf{j}} } = \\frac{\\mathbf{I}}\n{{\\mathbf{I} - \\mathbf{E}}}\n$$\nTo connect to the answer of <em>Jean Marie</em>, note that \n$$\n\\left( {\\mathbf{I} - \\mathbf{E}} \\right)\\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} } \\\\\n {x_{\\,1} } \\\\\n \\vdots \\\\\n {x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} ( - 0)} \\\\\n {x_{\\,1} - x_{\\,0} } \\\\\n \\vdots \\\\\n {x_{\\,n} - x_{\\,n - 1} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {\\nabla x_{\\,0} \\;\\left| {x_{\\, - 1} = 0} \\right.} \\\\\n {\\nabla x_{\\,1} } \\\\\n \\vdots \\\\\n {\\nabla x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\|\n$$</p>\n" }, { "answer_id": 2111901, "score": 6, "text": "<p>I would like to present a very simple solution by interpretation of these matrices as operators on $\\mathbb{R^n}$ (which will surprise nobody...). 
Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p>\n\n<p>For any $x_1,x_2,x_3,\\cdots x_n$:</p>\n\n<p>$$\\tag{1}A (x_1,x_2,x_3,\\cdots x_n)^T=(s_1,s_2,s_3,\\cdots s_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\\\s_3&amp;=&amp;x_1+x_2+x_3\\\\...\\end{cases}$$</p>\n\n<p>(1) is equivalent to:</p>\n\n<p>$$\\tag{2}A^{-1} (s_1,s_2,s_3,\\cdots x_n)^T=(x_1,x_2,x_3,\\cdots x_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}x_1&amp;=&amp; \\ \\ s_1&amp;&amp;&amp;&amp;\\\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\\\...\\end{cases}$$</p>\n\n<p>and it suffices now to \"collect the coefficients\" in the right order in order to constitute the inverse matrix.</p>\n\n<p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p>\n" }, { "answer_id": 2111733, "score": 4, "text": "<p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix.\n<hr>\n Denote $A_n$ as the $n \\times n$ matrix that has elements $a_{ij} = 1$ if $i \\geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \\ldots, R_n - R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \\prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} - R_i$. Since det$(A)$ is nonzero, it is invertible, hence $$B_n = \\left( A_n \\right)^{-1} = \\prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\\ldots,R_2 - R_1$$ to the idenity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br>\n$$B_n = \\begin{pmatrix}\n\\phantom{-}1 &amp; \\phantom{-}0 &amp;\\phantom{-}0 &amp;\\ldots &amp;0\\\\\n-1 &amp; \\phantom{-}1 &amp; \\phantom{-}0 &amp;\\ldots &amp;0\\\\\n\\phantom{-}0 &amp; -1 &amp; \\phantom{-}1 &amp;\\ldots &amp;0\\\\\n\\phantom{-}\\vdots &amp;\\phantom{-}\\vdots &amp; \\phantom{-}\\vdots &amp;\\ddots &amp; \\vdots\\\\\n\\phantom{-}0 &amp; \\phantom{-}0 &amp; \\phantom{-}0 &amp; \\cdots &amp;1 \\end{pmatrix}.$$\nIn order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p>\n" }, { "answer_id": 2111834, "score": 3, "text": "<p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p>\n\n<p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form\n$$\nA_n=\\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\\\1_{n-1}^T&amp;1\\end{bmatrix}.\n$$\nYou can easily verify that\n$$\nA_n^{-1}=\\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\\end{bmatrix}\n$$\nis the inverse of $A_n$.\nBy the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\\ldots,0,1]^T$ (sum of the rows of $B_{n-1}$). 
So\n$$\nA_n^{-1}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-1_{n-1}^TB_{n-1}&amp;1\\end{bmatrix}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-e_{n-1}^T&amp;1\\end{bmatrix}=B_n.\n$$</p>\n" }, { "answer_id": 2111993, "score": 1, "text": "<p>Just to show a different approach.<br>\nConsider the matrix $\\mathbf E$, having $1$ only on the first subdiagonal\n$$\n\\mathbf{E} = \\left\\| {\\,e_{\\,n,\\,m} = \\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 1} \\\\\n 0 &amp; {n \\ne m + 1} \\\\\n \\end{array} } \\right.\\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 1 \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\n$$\nMultiply it by $\\mathbf A$ , and it is easily seen that $$\\mathbf E \\, \\mathbf A=\\mathbf A-\\mathbf I \\quad \\Rightarrow \\quad \\left( \\mathbf I -\\mathbf E \\right)\\,\\mathbf A=\\mathbf I$$</p>\n\n<p>In another way, consider that the powers of $\\mathbf E$ are readily found and have a simple formulation\n$$\n\\begin{gathered}\n \\mathbf{E}^{\\,\\mathbf{2}} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - 1 \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 2 \\\\ \n\\end{gathered} \\right)\\;} \\right\\| = \\left\\| {\\,\\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 2} \\\\\n 0 &amp; {n \\ne m + 2} \\\\\n \\end{array} } \\right.\\;} \\right\\| \\hfill \\\\\n \\quad \\vdots \\hfill \\\\\n \\mathbf{E}^{\\,\\mathbf{q}} = \\mathbf{E}^{\\,\\mathbf{q} - \\mathbf{1}} \\,\\mathbf{E} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - \\left( {q - 1} \\right) \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\hfill \\\\\n = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - q \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\\quad \\left| {\\;0 \\leqslant \\text{integer}\\;q} \\right. \\hfill \\\\ \n\\end{gathered} \n$$\nTherefore\n$$\n\\mathbf{A} = \\sum\\limits_{0\\, \\leqslant \\,j} {\\mathbf{E}^{\\,\\mathbf{j}} } = \\frac{\\mathbf{I}}\n{{\\mathbf{I} - \\mathbf{E}}}\n$$\nTo connect to the answer of <em>Jean Marie</em>, note that \n$$\n\\left( {\\mathbf{I} - \\mathbf{E}} \\right)\\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} } \\\\\n {x_{\\,1} } \\\\\n \\vdots \\\\\n {x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} ( - 0)} \\\\\n {x_{\\,1} - x_{\\,0} } \\\\\n \\vdots \\\\\n {x_{\\,n} - x_{\\,n - 1} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {\\nabla x_{\\,0} \\;\\left| {x_{\\, - 1} = 0} \\right.} \\\\\n {\\nabla x_{\\,1} } \\\\\n \\vdots \\\\\n {\\nabla x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\|\n$$</p>\n" }, { "answer_id": 2111901, "score": 6, "text": "<p>I would like to present a very simple solution by interpretation of these matrices as operators on $\\mathbb{R^n}$ (which will surprise nobody...). 
Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p>\n\n<p>For any $x_1,x_2,x_3,\\cdots x_n$:</p>\n\n<p>$$\\tag{1}A (x_1,x_2,x_3,\\cdots x_n)^T=(s_1,s_2,s_3,\\cdots s_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\\\s_3&amp;=&amp;x_1+x_2+x_3\\\\...\\end{cases}$$</p>\n\n<p>(1) is equivalent to:</p>\n\n<p>$$\\tag{2}A^{-1} (s_1,s_2,s_3,\\cdots x_n)^T=(x_1,x_2,x_3,\\cdots x_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}x_1&amp;=&amp; \\ \\ s_1&amp;&amp;&amp;&amp;\\\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\\\...\\end{cases}$$</p>\n\n<p>and it suffices now to \"collect the coefficients\" in the right order in order to constitute the inverse matrix.</p>\n\n<p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p>\n" }, { "answer_id": 2111733, "score": 4, "text": "<p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix.\n<hr>\n Denote $A_n$ as the $n \\times n$ matrix that has elements $a_{ij} = 1$ if $i \\geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \\ldots, R_n - R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \\prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} - R_i$. Since det$(A)$ is nonzero, it is invertible, hence $$B_n = \\left( A_n \\right)^{-1} = \\prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\\ldots,R_2 - R_1$$ to the idenity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br>\n$$B_n = \\begin{pmatrix}\n\\phantom{-}1 &amp; \\phantom{-}0 &amp;\\phantom{-}0 &amp;\\ldots &amp;0\\\\\n-1 &amp; \\phantom{-}1 &amp; \\phantom{-}0 &amp;\\ldots &amp;0\\\\\n\\phantom{-}0 &amp; -1 &amp; \\phantom{-}1 &amp;\\ldots &amp;0\\\\\n\\phantom{-}\\vdots &amp;\\phantom{-}\\vdots &amp; \\phantom{-}\\vdots &amp;\\ddots &amp; \\vdots\\\\\n\\phantom{-}0 &amp; \\phantom{-}0 &amp; \\phantom{-}0 &amp; \\cdots &amp;1 \\end{pmatrix}.$$\nIn order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p>\n" }, { "answer_id": 2111834, "score": 3, "text": "<p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p>\n\n<p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form\n$$\nA_n=\\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\\\1_{n-1}^T&amp;1\\end{bmatrix}.\n$$\nYou can easily verify that\n$$\nA_n^{-1}=\\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\\end{bmatrix}\n$$\nis the inverse of $A_n$.\nBy the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\\ldots,0,1]^T$ (sum of the rows of $B_{n-1}$). 
So\n$$\nA_n^{-1}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-1_{n-1}^TB_{n-1}&amp;1\\end{bmatrix}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-e_{n-1}^T&amp;1\\end{bmatrix}=B_n.\n$$</p>\n" }, { "answer_id": 2111993, "score": 1, "text": "<p>Just to show a different approach.<br>\nConsider the matrix $\\mathbf E$, having $1$ only on the first subdiagonal\n$$\n\\mathbf{E} = \\left\\| {\\,e_{\\,n,\\,m} = \\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 1} \\\\\n 0 &amp; {n \\ne m + 1} \\\\\n \\end{array} } \\right.\\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 1 \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\n$$\nMultiply it by $\\mathbf A$ , and it is easily seen that $$\\mathbf E \\, \\mathbf A=\\mathbf A-\\mathbf I \\quad \\Rightarrow \\quad \\left( \\mathbf I -\\mathbf E \\right)\\,\\mathbf A=\\mathbf I$$</p>\n\n<p>In another way, consider that the powers of $\\mathbf E$ are readily found and have a simple formulation\n$$\n\\begin{gathered}\n \\mathbf{E}^{\\,\\mathbf{2}} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - 1 \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - 2 \\\\ \n\\end{gathered} \\right)\\;} \\right\\| = \\left\\| {\\,\\left\\{ {\\begin{array}{*{20}c}\n 1 &amp; {n = m + 2} \\\\\n 0 &amp; {n \\ne m + 2} \\\\\n \\end{array} } \\right.\\;} \\right\\| \\hfill \\\\\n \\quad \\vdots \\hfill \\\\\n \\mathbf{E}^{\\,\\mathbf{q}} = \\mathbf{E}^{\\,\\mathbf{q} - \\mathbf{1}} \\,\\mathbf{E} = \\left\\| {\\,\\sum\\limits_{0\\, \\leqslant \\,k\\,\\left( { \\leqslant \\,n - 1} \\right)} {\\left( \\begin{gathered}\n 0 \\\\ \n n - k - \\left( {q - 1} \\right) \\\\ \n\\end{gathered} \\right)\\left( \\begin{gathered}\n 0 \\\\ \n k - m - 1 \\\\ \n\\end{gathered} \\right)} \\;} \\right\\| = \\hfill \\\\\n = \\left\\| {\\,\\left( \\begin{gathered}\n 0 \\\\ \n n - m - q \\\\ \n\\end{gathered} \\right)\\;} \\right\\|\\quad \\left| {\\;0 \\leqslant \\text{integer}\\;q} \\right. \\hfill \\\\ \n\\end{gathered} \n$$\nTherefore\n$$\n\\mathbf{A} = \\sum\\limits_{0\\, \\leqslant \\,j} {\\mathbf{E}^{\\,\\mathbf{j}} } = \\frac{\\mathbf{I}}\n{{\\mathbf{I} - \\mathbf{E}}}\n$$\nTo connect to the answer of <em>Jean Marie</em>, note that \n$$\n\\left( {\\mathbf{I} - \\mathbf{E}} \\right)\\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} } \\\\\n {x_{\\,1} } \\\\\n \\vdots \\\\\n {x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {x_{\\,0} ( - 0)} \\\\\n {x_{\\,1} - x_{\\,0} } \\\\\n \\vdots \\\\\n {x_{\\,n} - x_{\\,n - 1} } \\\\\n \\end{array} \\;} \\right\\| = \\left\\| {\\,\\begin{array}{*{20}c}\n {\\nabla x_{\\,0} \\;\\left| {x_{\\, - 1} = 0} \\right.} \\\\\n {\\nabla x_{\\,1} } \\\\\n \\vdots \\\\\n {\\nabla x_{\\,n} } \\\\\n \\end{array} \\;} \\right\\|\n$$</p>\n" }, { "answer_id": 2111901, "score": 6, "text": "<p>I would like to present a very simple solution by interpretation of these matrices as operators on $\\mathbb{R^n}$ (which will surprise nobody...). 
Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p>\n\n<p>For any $x_1,x_2,x_3,\\cdots x_n$:</p>\n\n<p>$$\\tag{1}A (x_1,x_2,x_3,\\cdots x_n)^T=(s_1,s_2,s_3,\\cdots s_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\\\s_3&amp;=&amp;x_1+x_2+x_3\\\\...\\end{cases}$$</p>\n\n<p>(1) is equivalent to:</p>\n\n<p>$$\\tag{2}A^{-1} (s_1,s_2,s_3,\\cdots x_n)^T=(x_1,x_2,x_3,\\cdots x_n)^T \\ \\ \\text{with} \\ \\ \\begin{cases}x_1&amp;=&amp; \\ \\ s_1&amp;&amp;&amp;&amp;\\\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\\\...\\end{cases}$$</p>\n\n<p>and it suffices now to \"collect the coefficients\" in the right order in order to constitute the inverse matrix.</p>\n\n<p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p>\n" }, { "answer_id": 2111733, "score": 4, "text": "<p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix.\n<hr>\n Denote $A_n$ as the $n \\times n$ matrix that has elements $a_{ij} = 1$ if $i \\geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \\ldots, R_n - R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \\prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} - R_i$. Since det$(A)$ is nonzero, it is invertible, hence $$B_n = \\left( A_n \\right)^{-1} = \\prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\\ldots,R_2 - R_1$$ to the idenity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br>\n$$B_n = \\begin{pmatrix}\n\\phantom{-}1 &amp; \\phantom{-}0 &amp;\\phantom{-}0 &amp;\\ldots &amp;0\\\\\n-1 &amp; \\phantom{-}1 &amp; \\phantom{-}0 &amp;\\ldots &amp;0\\\\\n\\phantom{-}0 &amp; -1 &amp; \\phantom{-}1 &amp;\\ldots &amp;0\\\\\n\\phantom{-}\\vdots &amp;\\phantom{-}\\vdots &amp; \\phantom{-}\\vdots &amp;\\ddots &amp; \\vdots\\\\\n\\phantom{-}0 &amp; \\phantom{-}0 &amp; \\phantom{-}0 &amp; \\cdots &amp;1 \\end{pmatrix}.$$\nIn order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p>\n" }, { "answer_id": 2111834, "score": 3, "text": "<p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p>\n\n<p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form\n$$\nA_n=\\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\\\1_{n-1}^T&amp;1\\end{bmatrix}.\n$$\nYou can easily verify that\n$$\nA_n^{-1}=\\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\\end{bmatrix}\n$$\nis the inverse of $A_n$.\nBy the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\\ldots,0,1]^T$ (sum of the rows of $B_{n-1}$). 
So\n$$\nA_n^{-1}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-1_{n-1}^TB_{n-1}&amp;1\\end{bmatrix}=\\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\\\-e_{n-1}^T&amp;1\\end{bmatrix}=B_n.\n$$</p>\n" } ]
Question: Find the inverse of a lower triangular matrix of ones Answer 1: <p>I would like to present a very simple solution by interpretation of these matrices as operators on $\mathbb{R^n}$ (which will surprise nobody...). Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p> <p>For any $x_1,x_2,x_3,\cdots x_n$:</p> <p>$$\tag{1}A (x_1,x_2,x_3,\cdots x_n)^T=(s_1,s_2,s_3,\cdots s_n)^T \ \ \text{with} \ \ \begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\s_3&amp;=&amp;x_1+x_2+x_3\\...\end{cases}$$</p> <p>(1) is equivalent to:</p> <p>$$\tag{2}A^{-1} (s_1,s_2,s_3,\cdots s_n)^T=(x_1,x_2,x_3,\cdots x_n)^T \ \ \text{with} \ \ \begin{cases}x_1&amp;=&amp; \ \ s_1&amp;&amp;&amp;&amp;\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\...\end{cases}$$</p> <p>and it suffices now to "collect the coefficients" in the right order in order to constitute the inverse matrix.</p> <p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p> Answer 2: <p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix. <hr> Denote $A_n$ as the $n \times n$ matrix that has elements $a_{ij} = 1$ if $i \geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \ldots, R_n + R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} + R_i$. Since $\det(A_n)$ is nonzero, it is invertible, hence $$B_n = \left( A_n \right)^{-1} = \prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\ldots,R_2 - R_1$$ to the identity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br> $$B_n = \begin{pmatrix} \phantom{-}1 &amp; \phantom{-}0 &amp;\phantom{-}0 &amp;\ldots &amp;0\\ -1 &amp; \phantom{-}1 &amp; \phantom{-}0 &amp;\ldots &amp;0\\ \phantom{-}0 &amp; -1 &amp; \phantom{-}1 &amp;\ldots &amp;0\\ \phantom{-}\vdots &amp;\phantom{-}\vdots &amp; \phantom{-}\vdots &amp;\ddots &amp; \vdots\\ \phantom{-}0 &amp; \phantom{-}0 &amp; \phantom{-}0 &amp; \cdots &amp;1 \end{pmatrix}.$$ In order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p> Answer 3: <p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p> <p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ are the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form $$ A_n=\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\1_{n-1}^T&amp;1\end{bmatrix}. $$ You can easily verify that $$ A_n^{-1}=\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\end{bmatrix} $$ is the inverse of $A_n$. By the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\ldots,0,1]$ (sum of the rows of $B_{n-1}$). 
So $$ A_n^{-1}=\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\-1_{n-1}^TB_{n-1}&amp;1\end{bmatrix}=\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\-e_{n-1}^T&amp;1\end{bmatrix}=B_n. $$</p> Answer 4: <p>Just to show a different approach.<br> Consider the matrix $\mathbf E$, having $1$ only on the first subdiagonal $$ \mathbf{E} = \left\| {\,e_{\,n,\,m} = \left\{ {\begin{array}{*{20}c}    1 &amp; {n = m + 1}  \\    0 &amp; {n \ne m + 1}  \\  \end{array} } \right.\;} \right\| = \left\| {\,\left( \begin{gathered}   0 \\    n - m - 1 \\  \end{gathered}  \right)\;} \right\| $$ Multiply it by $\mathbf A$, and it is easily seen that $$\mathbf E \, \mathbf A=\mathbf A-\mathbf I \quad \Rightarrow \quad \left( \mathbf I -\mathbf E \right)\,\mathbf A=\mathbf I$$</p> <p>In another way, consider that the powers of $\mathbf E$ are readily found and have a simple formulation $$ \begin{gathered}   \mathbf{E}^{\,\mathbf{2}}  = \left\| {\,\sum\limits_{0\, \leqslant \,k\,\left( { \leqslant \,n - 1} \right)} {\left( \begin{gathered}   0 \\    n - k - 1 \\  \end{gathered}  \right)\left( \begin{gathered}   0 \\    k - m - 1 \\  \end{gathered}  \right)} \;} \right\| = \left\| {\,\left( \begin{gathered}   0 \\    n - m - 2 \\  \end{gathered}  \right)\;} \right\| = \left\| {\,\left\{ {\begin{array}{*{20}c}    1 &amp; {n = m + 2}  \\    0 &amp; {n \ne m + 2}  \\  \end{array} } \right.\;} \right\| \hfill \\   \quad  \vdots \hfill \\   \mathbf{E}^{\,\mathbf{q}}  = \mathbf{E}^{\,\mathbf{q} - \mathbf{1}} \,\mathbf{E} = \left\| {\,\sum\limits_{0\, \leqslant \,k\,\left( { \leqslant \,n - 1} \right)} {\left( \begin{gathered}   0 \\    n - k - \left( {q - 1} \right) \\  \end{gathered}  \right)\left( \begin{gathered}   0 \\    k - m - 1 \\  \end{gathered}  \right)} \;} \right\| =  \hfill \\    = \left\| {\,\left( \begin{gathered}   0 \\    n - m - q \\  \end{gathered}  \right)\;} \right\|\quad \left| {\;0 \leqslant \text{integer}\;q} \right. \hfill \\  \end{gathered} $$ Therefore, $\mathbf E$ being nilpotent, the sum below is finite and $$ \mathbf{A} = \sum\limits_{0\, \leqslant \,j} {\mathbf{E}^{\,\mathbf{j}} }  = \left( {\mathbf{I} - \mathbf{E}} \right)^{ - 1} $$ To connect to the answer of <em>Jean Marie</em>, note that $$ \left( {\mathbf{I} - \mathbf{E}} \right)\left\| {\,\begin{array}{*{20}c}    {x_{\,0} }  \\    {x_{\,1} }  \\     \vdots   \\    {x_{\,n} }  \\  \end{array} \;} \right\| = \left\| {\,\begin{array}{*{20}c}    {x_{\,0} - 0}  \\    {x_{\,1}  - x_{\,0} }  \\     \vdots   \\    {x_{\,n}  - x_{\,n - 1} }  \\  \end{array} \;} \right\| = \left\| {\,\begin{array}{*{20}c}    {\nabla x_{\,0} \;\left| {x_{\, - 1}  = 0} \right.}  \\    {\nabla x_{\,1} }  \\     \vdots   \\    {\nabla x_{\,n} }  \\  \end{array} \;} \right\| $$</p>
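<p>As a quick cross-check of the answers above, the claimed inverse can be verified numerically. The following is a minimal sketch assuming NumPy is available; the variable names are ours for illustration and do not come from any of the answers.</p> <pre><code>import numpy as np

n = 6
A = np.tril(np.ones((n, n)))                   # a_ij = 1 for i >= j, the matrix in question
B = np.eye(n) - np.diag(np.ones(n - 1), k=-1)  # 1 on the diagonal, -1 just below it

assert np.allclose(A @ B, np.eye(n))           # A_n B_n = I_n, as in Answer 2
assert np.allclose(B @ A, np.eye(n))

# Answer 1's reading: A is a running-sum (discrete integration) operator,
# so its inverse acts by first differences.
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])
s = A @ x
assert np.allclose(s, np.cumsum(x))            # s_k = x_1 + ... + x_k
assert np.allclose(B @ s, x)                   # differencing recovers x
</code></pre>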
Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p> <p>For any $x_1,x_2,x_3,\cdots x_n$:</p> <p>$$\tag{1}A (x_1,x_2,x_3,\cdots x_n)^T=(s_1,s_2,s_3,\cdots s_n)^T \ \ \text{with} \ \ \begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\s_3&amp;=&amp;x_1+x_2+x_3\\...\end{cases}$$</p> <p>(1) is equivalent to:</p> <p>$$\tag{2}A^{-1} (s_1,s_2,s_3,\cdots x_n)^T=(x_1,x_2,x_3,\cdots x_n)^T \ \ \text{with} \ \ \begin{cases}x_1&amp;=&amp; \ \ s_1&amp;&amp;&amp;&amp;\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\...\end{cases}$$</p> <p>and it suffices now to "collect the coefficients" in the right order in order to constitute the inverse matrix.</p> <p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p> Answer 6: <p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix. <hr> Denote $A_n$ as the $n \times n$ matrix that has elements $a_{ij} = 1$ if $i \geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \ldots, R_n - R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} - R_i$. Since det$(A)$ is nonzero, it is invertible, hence $$B_n = \left( A_n \right)^{-1} = \prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\ldots,R_2 - R_1$$ to the idenity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br> $$B_n = \begin{pmatrix} \phantom{-}1 &amp; \phantom{-}0 &amp;\phantom{-}0 &amp;\ldots &amp;0\\ -1 &amp; \phantom{-}1 &amp; \phantom{-}0 &amp;\ldots &amp;0\\ \phantom{-}0 &amp; -1 &amp; \phantom{-}1 &amp;\ldots &amp;0\\ \phantom{-}\vdots &amp;\phantom{-}\vdots &amp; \phantom{-}\vdots &amp;\ddots &amp; \vdots\\ \phantom{-}0 &amp; \phantom{-}0 &amp; \phantom{-}0 &amp; \cdots &amp;1 \end{pmatrix}.$$ In order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p> Answer 7: <p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p> <p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form $$ A_n=\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\1_{n-1}^T&amp;1\end{bmatrix}. $$ You can easily verify that $$ A_n^{-1}=\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\end{bmatrix} $$ is the inverse of $A_n$. By the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\ldots,0,1]^T$ (sum of the rows of $B_{n-1}$). So $$ A_n^{-1}=\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\-1_{n-1}^TB_{n-1}&amp;1\end{bmatrix}=\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\-e_{n-1}^T&amp;1\end{bmatrix}=B_n. 
$$</p> Answer 8: <p>Just to show a different approach.<br> Consider the matrix $\mathbf E$, having $1$ only on the first subdiagonal $$ \mathbf{E} = \left\| {\,e_{\,n,\,m} = \left\{ {\begin{array}{*{20}c} 1 &amp; {n = m + 1} \\ 0 &amp; {n \ne m + 1} \\ \end{array} } \right.\;} \right\| = \left\| {\,\left( \begin{gathered} 0 \\ n - m - 1 \\ \end{gathered} \right)\;} \right\| $$ Multiply it by $\mathbf A$ , and it is easily seen that $$\mathbf E \, \mathbf A=\mathbf A-\mathbf I \quad \Rightarrow \quad \left( \mathbf I -\mathbf E \right)\,\mathbf A=\mathbf I$$</p> <p>In another way, consider that the powers of $\mathbf E$ are readily found and have a simple formulation $$ \begin{gathered} \mathbf{E}^{\,\mathbf{2}} = \left\| {\,\sum\limits_{0\, \leqslant \,k\,\left( { \leqslant \,n - 1} \right)} {\left( \begin{gathered} 0 \\ n - k - 1 \\ \end{gathered} \right)\left( \begin{gathered} 0 \\ k - m - 1 \\ \end{gathered} \right)} \;} \right\| = \left\| {\,\left( \begin{gathered} 0 \\ n - m - 2 \\ \end{gathered} \right)\;} \right\| = \left\| {\,\left\{ {\begin{array}{*{20}c} 1 &amp; {n = m + 2} \\ 0 &amp; {n \ne m + 2} \\ \end{array} } \right.\;} \right\| \hfill \\ \quad \vdots \hfill \\ \mathbf{E}^{\,\mathbf{q}} = \mathbf{E}^{\,\mathbf{q} - \mathbf{1}} \,\mathbf{E} = \left\| {\,\sum\limits_{0\, \leqslant \,k\,\left( { \leqslant \,n - 1} \right)} {\left( \begin{gathered} 0 \\ n - k - \left( {q - 1} \right) \\ \end{gathered} \right)\left( \begin{gathered} 0 \\ k - m - 1 \\ \end{gathered} \right)} \;} \right\| = \hfill \\ = \left\| {\,\left( \begin{gathered} 0 \\ n - m - q \\ \end{gathered} \right)\;} \right\|\quad \left| {\;0 \leqslant \text{integer}\;q} \right. \hfill \\ \end{gathered} $$ Therefore $$ \mathbf{A} = \sum\limits_{0\, \leqslant \,j} {\mathbf{E}^{\,\mathbf{j}} } = \frac{\mathbf{I}} {{\mathbf{I} - \mathbf{E}}} $$ To connect to the answer of <em>Jean Marie</em>, note that $$ \left( {\mathbf{I} - \mathbf{E}} \right)\left\| {\,\begin{array}{*{20}c} {x_{\,0} } \\ {x_{\,1} } \\ \vdots \\ {x_{\,n} } \\ \end{array} \;} \right\| = \left\| {\,\begin{array}{*{20}c} {x_{\,0} ( - 0)} \\ {x_{\,1} - x_{\,0} } \\ \vdots \\ {x_{\,n} - x_{\,n - 1} } \\ \end{array} \;} \right\| = \left\| {\,\begin{array}{*{20}c} {\nabla x_{\,0} \;\left| {x_{\, - 1} = 0} \right.} \\ {\nabla x_{\,1} } \\ \vdots \\ {\nabla x_{\,n} } \\ \end{array} \;} \right\| $$</p> Answer 9: <p>I would like to present a very simple solution by interpretation of these matrices as operators on $\mathbb{R^n}$ (which will surprise nobody...). Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p> <p>For any $x_1,x_2,x_3,\cdots x_n$:</p> <p>$$\tag{1}A (x_1,x_2,x_3,\cdots x_n)^T=(s_1,s_2,s_3,\cdots s_n)^T \ \ \text{with} \ \ \begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\s_3&amp;=&amp;x_1+x_2+x_3\\...\end{cases}$$</p> <p>(1) is equivalent to:</p> <p>$$\tag{2}A^{-1} (s_1,s_2,s_3,\cdots x_n)^T=(x_1,x_2,x_3,\cdots x_n)^T \ \ \text{with} \ \ \begin{cases}x_1&amp;=&amp; \ \ s_1&amp;&amp;&amp;&amp;\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\...\end{cases}$$</p> <p>and it suffices now to "collect the coefficients" in the right order in order to constitute the inverse matrix.</p> <p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p> Answer 10: <p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix. 
<hr> Denote $A_n$ as the $n \times n$ matrix that has elements $a_{ij} = 1$ if $i \geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \ldots, R_n - R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} - R_i$. Since det$(A)$ is nonzero, it is invertible, hence $$B_n = \left( A_n \right)^{-1} = \prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\ldots,R_2 - R_1$$ to the idenity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br> $$B_n = \begin{pmatrix} \phantom{-}1 &amp; \phantom{-}0 &amp;\phantom{-}0 &amp;\ldots &amp;0\\ -1 &amp; \phantom{-}1 &amp; \phantom{-}0 &amp;\ldots &amp;0\\ \phantom{-}0 &amp; -1 &amp; \phantom{-}1 &amp;\ldots &amp;0\\ \phantom{-}\vdots &amp;\phantom{-}\vdots &amp; \phantom{-}\vdots &amp;\ddots &amp; \vdots\\ \phantom{-}0 &amp; \phantom{-}0 &amp; \phantom{-}0 &amp; \cdots &amp;1 \end{pmatrix}.$$ In order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p> Answer 11: <p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p> <p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form $$ A_n=\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\1_{n-1}^T&amp;1\end{bmatrix}. $$ You can easily verify that $$ A_n^{-1}=\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\end{bmatrix} $$ is the inverse of $A_n$. By the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\ldots,0,1]^T$ (sum of the rows of $B_{n-1}$). So $$ A_n^{-1}=\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\-1_{n-1}^TB_{n-1}&amp;1\end{bmatrix}=\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\-e_{n-1}^T&amp;1\end{bmatrix}=B_n. 
$$</p> Answer 12: <p>Just to show a different approach.<br> Consider the matrix $\mathbf E$, having $1$ only on the first subdiagonal $$ \mathbf{E} = \left\| {\,e_{\,n,\,m} = \left\{ {\begin{array}{*{20}c} 1 &amp; {n = m + 1} \\ 0 &amp; {n \ne m + 1} \\ \end{array} } \right.\;} \right\| = \left\| {\,\left( \begin{gathered} 0 \\ n - m - 1 \\ \end{gathered} \right)\;} \right\| $$ Multiply it by $\mathbf A$ , and it is easily seen that $$\mathbf E \, \mathbf A=\mathbf A-\mathbf I \quad \Rightarrow \quad \left( \mathbf I -\mathbf E \right)\,\mathbf A=\mathbf I$$</p> <p>In another way, consider that the powers of $\mathbf E$ are readily found and have a simple formulation $$ \begin{gathered} \mathbf{E}^{\,\mathbf{2}} = \left\| {\,\sum\limits_{0\, \leqslant \,k\,\left( { \leqslant \,n - 1} \right)} {\left( \begin{gathered} 0 \\ n - k - 1 \\ \end{gathered} \right)\left( \begin{gathered} 0 \\ k - m - 1 \\ \end{gathered} \right)} \;} \right\| = \left\| {\,\left( \begin{gathered} 0 \\ n - m - 2 \\ \end{gathered} \right)\;} \right\| = \left\| {\,\left\{ {\begin{array}{*{20}c} 1 &amp; {n = m + 2} \\ 0 &amp; {n \ne m + 2} \\ \end{array} } \right.\;} \right\| \hfill \\ \quad \vdots \hfill \\ \mathbf{E}^{\,\mathbf{q}} = \mathbf{E}^{\,\mathbf{q} - \mathbf{1}} \,\mathbf{E} = \left\| {\,\sum\limits_{0\, \leqslant \,k\,\left( { \leqslant \,n - 1} \right)} {\left( \begin{gathered} 0 \\ n - k - \left( {q - 1} \right) \\ \end{gathered} \right)\left( \begin{gathered} 0 \\ k - m - 1 \\ \end{gathered} \right)} \;} \right\| = \hfill \\ = \left\| {\,\left( \begin{gathered} 0 \\ n - m - q \\ \end{gathered} \right)\;} \right\|\quad \left| {\;0 \leqslant \text{integer}\;q} \right. \hfill \\ \end{gathered} $$ Therefore $$ \mathbf{A} = \sum\limits_{0\, \leqslant \,j} {\mathbf{E}^{\,\mathbf{j}} } = \frac{\mathbf{I}} {{\mathbf{I} - \mathbf{E}}} $$ To connect to the answer of <em>Jean Marie</em>, note that $$ \left( {\mathbf{I} - \mathbf{E}} \right)\left\| {\,\begin{array}{*{20}c} {x_{\,0} } \\ {x_{\,1} } \\ \vdots \\ {x_{\,n} } \\ \end{array} \;} \right\| = \left\| {\,\begin{array}{*{20}c} {x_{\,0} ( - 0)} \\ {x_{\,1} - x_{\,0} } \\ \vdots \\ {x_{\,n} - x_{\,n - 1} } \\ \end{array} \;} \right\| = \left\| {\,\begin{array}{*{20}c} {\nabla x_{\,0} \;\left| {x_{\, - 1} = 0} \right.} \\ {\nabla x_{\,1} } \\ \vdots \\ {\nabla x_{\,n} } \\ \end{array} \;} \right\| $$</p> Answer 13: <p>I would like to present a very simple solution by interpretation of these matrices as operators on $\mathbb{R^n}$ (which will surprise nobody...). Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p> <p>For any $x_1,x_2,x_3,\cdots x_n$:</p> <p>$$\tag{1}A (x_1,x_2,x_3,\cdots x_n)^T=(s_1,s_2,s_3,\cdots s_n)^T \ \ \text{with} \ \ \begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\s_3&amp;=&amp;x_1+x_2+x_3\\...\end{cases}$$</p> <p>(1) is equivalent to:</p> <p>$$\tag{2}A^{-1} (s_1,s_2,s_3,\cdots x_n)^T=(x_1,x_2,x_3,\cdots x_n)^T \ \ \text{with} \ \ \begin{cases}x_1&amp;=&amp; \ \ s_1&amp;&amp;&amp;&amp;\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\...\end{cases}$$</p> <p>and it suffices now to "collect the coefficients" in the right order in order to constitute the inverse matrix.</p> <p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p> Answer 14: <p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix. 
<hr> Denote $A_n$ as the $n \times n$ matrix that has elements $a_{ij} = 1$ if $i \geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \ldots, R_n - R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} - R_i$. Since det$(A)$ is nonzero, it is invertible, hence $$B_n = \left( A_n \right)^{-1} = \prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\ldots,R_2 - R_1$$ to the idenity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br> $$B_n = \begin{pmatrix} \phantom{-}1 &amp; \phantom{-}0 &amp;\phantom{-}0 &amp;\ldots &amp;0\\ -1 &amp; \phantom{-}1 &amp; \phantom{-}0 &amp;\ldots &amp;0\\ \phantom{-}0 &amp; -1 &amp; \phantom{-}1 &amp;\ldots &amp;0\\ \phantom{-}\vdots &amp;\phantom{-}\vdots &amp; \phantom{-}\vdots &amp;\ddots &amp; \vdots\\ \phantom{-}0 &amp; \phantom{-}0 &amp; \phantom{-}0 &amp; \cdots &amp;1 \end{pmatrix}.$$ In order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p> Answer 15: <p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p> <p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form $$ A_n=\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\1_{n-1}^T&amp;1\end{bmatrix}. $$ You can easily verify that $$ A_n^{-1}=\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\end{bmatrix} $$ is the inverse of $A_n$. By the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\ldots,0,1]^T$ (sum of the rows of $B_{n-1}$). So $$ A_n^{-1}=\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\-1_{n-1}^TB_{n-1}&amp;1\end{bmatrix}=\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\-e_{n-1}^T&amp;1\end{bmatrix}=B_n. 
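<p>The block recursion of Answer 3 can be exercised the same way. Again a sketch under the same NumPy assumption; the helper <code>B_recursive</code> is an illustrative name, not part of the answer.</p> <pre><code>import numpy as np

def B_recursive(n):
    """Build A_n^{-1} by the block recursion B_n = [[B_{n-1}, 0], [-1^T B_{n-1}, 1]],
    starting from B_1 = [1], exactly as in the induction step."""
    B = np.array([[1.0]])
    for k in range(2, n + 1):
        ones_row = np.ones((1, k - 1))                   # the row vector 1_{k-1}^T
        top = np.hstack([B, np.zeros((k - 1, 1))])
        bottom = np.hstack([-ones_row @ B, np.array([[1.0]])])
        B = np.vstack([top, bottom])
    return B

n = 5
A = np.tril(np.ones((n, n)))
assert np.allclose(B_recursive(n) @ A, np.eye(n))
# 1^T B_{n-1} collapses to e_{n-1}^T = [0, ..., 0, 1], as claimed:
assert np.allclose(np.ones(n - 1) @ B_recursive(n - 1), np.eye(n - 1)[-1])
</code></pre>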
$$</p> Answer 16: <p>Just to show a different approach.<br> Consider the matrix $\mathbf E$, having $1$ only on the first subdiagonal $$ \mathbf{E} = \left\| {\,e_{\,n,\,m} = \left\{ {\begin{array}{*{20}c} 1 &amp; {n = m + 1} \\ 0 &amp; {n \ne m + 1} \\ \end{array} } \right.\;} \right\| = \left\| {\,\left( \begin{gathered} 0 \\ n - m - 1 \\ \end{gathered} \right)\;} \right\| $$ Multiply it by $\mathbf A$ , and it is easily seen that $$\mathbf E \, \mathbf A=\mathbf A-\mathbf I \quad \Rightarrow \quad \left( \mathbf I -\mathbf E \right)\,\mathbf A=\mathbf I$$</p> <p>In another way, consider that the powers of $\mathbf E$ are readily found and have a simple formulation $$ \begin{gathered} \mathbf{E}^{\,\mathbf{2}} = \left\| {\,\sum\limits_{0\, \leqslant \,k\,\left( { \leqslant \,n - 1} \right)} {\left( \begin{gathered} 0 \\ n - k - 1 \\ \end{gathered} \right)\left( \begin{gathered} 0 \\ k - m - 1 \\ \end{gathered} \right)} \;} \right\| = \left\| {\,\left( \begin{gathered} 0 \\ n - m - 2 \\ \end{gathered} \right)\;} \right\| = \left\| {\,\left\{ {\begin{array}{*{20}c} 1 &amp; {n = m + 2} \\ 0 &amp; {n \ne m + 2} \\ \end{array} } \right.\;} \right\| \hfill \\ \quad \vdots \hfill \\ \mathbf{E}^{\,\mathbf{q}} = \mathbf{E}^{\,\mathbf{q} - \mathbf{1}} \,\mathbf{E} = \left\| {\,\sum\limits_{0\, \leqslant \,k\,\left( { \leqslant \,n - 1} \right)} {\left( \begin{gathered} 0 \\ n - k - \left( {q - 1} \right) \\ \end{gathered} \right)\left( \begin{gathered} 0 \\ k - m - 1 \\ \end{gathered} \right)} \;} \right\| = \hfill \\ = \left\| {\,\left( \begin{gathered} 0 \\ n - m - q \\ \end{gathered} \right)\;} \right\|\quad \left| {\;0 \leqslant \text{integer}\;q} \right. \hfill \\ \end{gathered} $$ Therefore $$ \mathbf{A} = \sum\limits_{0\, \leqslant \,j} {\mathbf{E}^{\,\mathbf{j}} } = \frac{\mathbf{I}} {{\mathbf{I} - \mathbf{E}}} $$ To connect to the answer of <em>Jean Marie</em>, note that $$ \left( {\mathbf{I} - \mathbf{E}} \right)\left\| {\,\begin{array}{*{20}c} {x_{\,0} } \\ {x_{\,1} } \\ \vdots \\ {x_{\,n} } \\ \end{array} \;} \right\| = \left\| {\,\begin{array}{*{20}c} {x_{\,0} ( - 0)} \\ {x_{\,1} - x_{\,0} } \\ \vdots \\ {x_{\,n} - x_{\,n - 1} } \\ \end{array} \;} \right\| = \left\| {\,\begin{array}{*{20}c} {\nabla x_{\,0} \;\left| {x_{\, - 1} = 0} \right.} \\ {\nabla x_{\,1} } \\ \vdots \\ {\nabla x_{\,n} } \\ \end{array} \;} \right\| $$</p> Answer 17: <p>I would like to present a very simple solution by interpretation of these matrices as operators on $\mathbb{R^n}$ (which will surprise nobody...). Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p> <p>For any $x_1,x_2,x_3,\cdots x_n$:</p> <p>$$\tag{1}A (x_1,x_2,x_3,\cdots x_n)^T=(s_1,s_2,s_3,\cdots s_n)^T \ \ \text{with} \ \ \begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\s_3&amp;=&amp;x_1+x_2+x_3\\...\end{cases}$$</p> <p>(1) is equivalent to:</p> <p>$$\tag{2}A^{-1} (s_1,s_2,s_3,\cdots x_n)^T=(x_1,x_2,x_3,\cdots x_n)^T \ \ \text{with} \ \ \begin{cases}x_1&amp;=&amp; \ \ s_1&amp;&amp;&amp;&amp;\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\...\end{cases}$$</p> <p>and it suffices now to "collect the coefficients" in the right order in order to constitute the inverse matrix.</p> <p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p> Answer 18: <p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix. 
<hr> Denote $A_n$ as the $n \times n$ matrix that has elements $a_{ij} = 1$ if $i \geq j$, and $0$ otherwise. Then, $A_n$ can be constructed by applying the elementary row operations $$R_2 + R_1, R_3 + R_2, \ldots, R_n - R_{n-1}$$ to the identity matrix $I_n$. So, $$A_n = \prod_{i=1}^{n-1} E_i,$$ where $E_i$ is the elementary matrix representing the row operation $R_{i+1} - R_i$. Since det$(A)$ is nonzero, it is invertible, hence $$B_n = \left( A_n \right)^{-1} = \prod_{i = n-1}^{1} {E_i}^{-1},$$ where ${E_i}^{-1}$ is the row operation $R_{i+1} - R_i$. That is, $B_n$ can be constructed by applying the row operations $$R_n - R_{n-1}, R_{n-1} - R_{n-2},\ldots,R_2 - R_1$$ to the idenity matrix $I_n$. The inverse of $A_n$ is therefore given by the matrix<br> $$B_n = \begin{pmatrix} \phantom{-}1 &amp; \phantom{-}0 &amp;\phantom{-}0 &amp;\ldots &amp;0\\ -1 &amp; \phantom{-}1 &amp; \phantom{-}0 &amp;\ldots &amp;0\\ \phantom{-}0 &amp; -1 &amp; \phantom{-}1 &amp;\ldots &amp;0\\ \phantom{-}\vdots &amp;\phantom{-}\vdots &amp; \phantom{-}\vdots &amp;\ddots &amp; \vdots\\ \phantom{-}0 &amp; \phantom{-}0 &amp; \phantom{-}0 &amp; \cdots &amp;1 \end{pmatrix}.$$ In order to check that this is the correct inverse, you can use the identity $$A_nB_n = I_n.$$</p> Answer 19: <p>You can also use the induction along the dimension $n$ (leading to a simple but not very constructive proof though). </p> <p>Let $A_n$ denote the matrix $A$ of dimension $n$ and $B_n$ its inverse. The statement is obviously true for $n=1$ and reduces to showing that $1=1$. Let the statement be true for $n-1$, that is, $A_{n-1}^{-1}=B_{n-1}$. If $1_{n-1}$ and $0_{n-1}$ the column vectors of ones and zeros, respectively, of dimension $n-1$, the matrix $A_n$ can be written in the block form $$ A_n=\begin{bmatrix}A_{n-1}&amp;0_{n-1}\\1_{n-1}^T&amp;1\end{bmatrix}. $$ You can easily verify that $$ A_n^{-1}=\begin{bmatrix}A_{n-1}^{-1}&amp;0_{n-1}\\-1_{n-1}^TA_{n-1}^{-1}&amp;1\end{bmatrix} $$ is the inverse of $A_n$. By the induction assumption, $A_{n-1}^{-1}=B_{n-1}$. It is easy to check that $1_{n-1}^TA_{n-1}^{-1}=1_{n-1}^TB_{n-1}=e_{n-1}^T:=[0,\ldots,0,1]^T$ (sum of the rows of $B_{n-1}$). So $$ A_n^{-1}=\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\-1_{n-1}^TB_{n-1}&amp;1\end{bmatrix}=\begin{bmatrix}B_{n-1}&amp;0_{n-1}\\-e_{n-1}^T&amp;1\end{bmatrix}=B_n. 
$$</p> Answer 20: <p>Just to show a different approach.<br> Consider the matrix $\mathbf E$, having $1$ only on the first subdiagonal $$ \mathbf{E} = \left\| {\,e_{\,n,\,m} = \left\{ {\begin{array}{*{20}c} 1 &amp; {n = m + 1} \\ 0 &amp; {n \ne m + 1} \\ \end{array} } \right.\;} \right\| = \left\| {\,\left( \begin{gathered} 0 \\ n - m - 1 \\ \end{gathered} \right)\;} \right\| $$ Multiply it by $\mathbf A$ , and it is easily seen that $$\mathbf E \, \mathbf A=\mathbf A-\mathbf I \quad \Rightarrow \quad \left( \mathbf I -\mathbf E \right)\,\mathbf A=\mathbf I$$</p> <p>In another way, consider that the powers of $\mathbf E$ are readily found and have a simple formulation $$ \begin{gathered} \mathbf{E}^{\,\mathbf{2}} = \left\| {\,\sum\limits_{0\, \leqslant \,k\,\left( { \leqslant \,n - 1} \right)} {\left( \begin{gathered} 0 \\ n - k - 1 \\ \end{gathered} \right)\left( \begin{gathered} 0 \\ k - m - 1 \\ \end{gathered} \right)} \;} \right\| = \left\| {\,\left( \begin{gathered} 0 \\ n - m - 2 \\ \end{gathered} \right)\;} \right\| = \left\| {\,\left\{ {\begin{array}{*{20}c} 1 &amp; {n = m + 2} \\ 0 &amp; {n \ne m + 2} \\ \end{array} } \right.\;} \right\| \hfill \\ \quad \vdots \hfill \\ \mathbf{E}^{\,\mathbf{q}} = \mathbf{E}^{\,\mathbf{q} - \mathbf{1}} \,\mathbf{E} = \left\| {\,\sum\limits_{0\, \leqslant \,k\,\left( { \leqslant \,n - 1} \right)} {\left( \begin{gathered} 0 \\ n - k - \left( {q - 1} \right) \\ \end{gathered} \right)\left( \begin{gathered} 0 \\ k - m - 1 \\ \end{gathered} \right)} \;} \right\| = \hfill \\ = \left\| {\,\left( \begin{gathered} 0 \\ n - m - q \\ \end{gathered} \right)\;} \right\|\quad \left| {\;0 \leqslant \text{integer}\;q} \right. \hfill \\ \end{gathered} $$ Therefore $$ \mathbf{A} = \sum\limits_{0\, \leqslant \,j} {\mathbf{E}^{\,\mathbf{j}} } = \frac{\mathbf{I}} {{\mathbf{I} - \mathbf{E}}} $$ To connect to the answer of <em>Jean Marie</em>, note that $$ \left( {\mathbf{I} - \mathbf{E}} \right)\left\| {\,\begin{array}{*{20}c} {x_{\,0} } \\ {x_{\,1} } \\ \vdots \\ {x_{\,n} } \\ \end{array} \;} \right\| = \left\| {\,\begin{array}{*{20}c} {x_{\,0} ( - 0)} \\ {x_{\,1} - x_{\,0} } \\ \vdots \\ {x_{\,n} - x_{\,n - 1} } \\ \end{array} \;} \right\| = \left\| {\,\begin{array}{*{20}c} {\nabla x_{\,0} \;\left| {x_{\, - 1} = 0} \right.} \\ {\nabla x_{\,1} } \\ \vdots \\ {\nabla x_{\,n} } \\ \end{array} \;} \right\| $$</p> Answer 21: <p>I would like to present a very simple solution by interpretation of these matrices as operators on $\mathbb{R^n}$ (which will surprise nobody...). Triangular matrix $A$ acts as a <strong>discrete integration operator</strong>: </p> <p>For any $x_1,x_2,x_3,\cdots x_n$:</p> <p>$$\tag{1}A (x_1,x_2,x_3,\cdots x_n)^T=(s_1,s_2,s_3,\cdots s_n)^T \ \ \text{with} \ \ \begin{cases}s_1&amp;=&amp;x_1&amp;&amp;&amp;&amp;\\s_2&amp;=&amp;x_1+x_2&amp;&amp;\\s_3&amp;=&amp;x_1+x_2+x_3\\...\end{cases}$$</p> <p>(1) is equivalent to:</p> <p>$$\tag{2}A^{-1} (s_1,s_2,s_3,\cdots x_n)^T=(x_1,x_2,x_3,\cdots x_n)^T \ \ \text{with} \ \ \begin{cases}x_1&amp;=&amp; \ \ s_1&amp;&amp;&amp;&amp;\\x_2&amp;=&amp;-s_1&amp;+&amp;s_2&amp;&amp;\\x_3&amp;=&amp;&amp;&amp;-s_2&amp;+&amp;s_3\\...\end{cases}$$</p> <p>and it suffices now to "collect the coefficients" in the right order in order to constitute the inverse matrix.</p> <p>(Thus the inverse operation is - in a natural way - a <strong>discrete derivation operator</strong>).</p> Answer 22: <p>Yes, your inverse is correct. Let me show you an alternative approach of finding this inverse without the use of the adjugate matrix. 
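To tie the last two answers together concretely, here is another small sketch of my own (not from the original posts): $A$ acts as numpy's cumulative sum, its inverse as a first difference, and $\mathbf{E}$ is the subdiagonal shift.

```python
import numpy as np

x = np.array([3, 1, 4, 1, 5])
n = len(x)
A = np.tril(np.ones((n, n), dtype=int))  # the discrete integration operator

s = A @ x
assert (s == np.cumsum(x)).all()         # A computes the partial sums s_k

# Its inverse is the first-difference operator: x_k = s_k - s_{k-1}, with s_{-1} = 0.
assert (np.diff(s, prepend=0) == x).all()

# Equivalently (I - E) s = x, where E has ones on the first subdiagonal.
E = np.eye(n, dtype=int, k=-1)
assert ((np.eye(n, dtype=int) - E) @ s == x).all()
```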
If every element of R has a (multiplicative) inverse, then $R = \{0\}$
2,111,594
7
https://math.stackexchange.com/questions/2111594/if-every-element-of-r-has-a-multiplicative-inverse-then-r-0
<p>I am struggling to understand why this is the case. I need to prove this, but I don't understand how it's true. For example, if every non-zero element of $R$ has a multiplicative inverse, then it's a field. So how does $R=\{0\}$?</p> <p>Thank you for your time :)</p>
[ { "answer_id": 2111595, "score": 10, "text": "<p>We have $0 \\cdot x = 1$ for some $x \\in R$, so $1 = 0$, and it follows that for $x\\in R$, $x = x\\cdot 1 = x \\cdot 0 = 0$. </p>\n" }, { "answer_id": 2111619, "score": 3, "text": "<p>First of all if every non zero element of $R$ has a multiplicative inverse that makes it a division ring and not a field a good example is the quaternions it only becomes a field if the multiplication is commutative. Secondly from the axioms of ring theory $0$ does not have a multiplicative inverse except for the ring {$0$}</p>\n" }, { "answer_id": 2111595, "score": 10, "text": "<p>We have $0 \\cdot x = 1$ for some $x \\in R$, so $1 = 0$, and it follows that for $x\\in R$, $x = x\\cdot 1 = x \\cdot 0 = 0$. </p>\n" }, { "answer_id": 2111619, "score": 3, "text": "<p>First of all if every non zero element of $R$ has a multiplicative inverse that makes it a division ring and not a field a good example is the quaternions it only becomes a field if the multiplication is commutative. Secondly from the axioms of ring theory $0$ does not have a multiplicative inverse except for the ring {$0$}</p>\n" }, { "answer_id": 2111595, "score": 10, "text": "<p>We have $0 \\cdot x = 1$ for some $x \\in R$, so $1 = 0$, and it follows that for $x\\in R$, $x = x\\cdot 1 = x \\cdot 0 = 0$. </p>\n" }, { "answer_id": 2111619, "score": 3, "text": "<p>First of all if every non zero element of $R$ has a multiplicative inverse that makes it a division ring and not a field a good example is the quaternions it only becomes a field if the multiplication is commutative. Secondly from the axioms of ring theory $0$ does not have a multiplicative inverse except for the ring {$0$}</p>\n" }, { "answer_id": 2111595, "score": 10, "text": "<p>We have $0 \\cdot x = 1$ for some $x \\in R$, so $1 = 0$, and it follows that for $x\\in R$, $x = x\\cdot 1 = x \\cdot 0 = 0$. </p>\n" }, { "answer_id": 2111619, "score": 3, "text": "<p>First of all if every non zero element of $R$ has a multiplicative inverse that makes it a division ring and not a field a good example is the quaternions it only becomes a field if the multiplication is commutative. Secondly from the axioms of ring theory $0$ does not have a multiplicative inverse except for the ring {$0$}</p>\n" }, { "answer_id": 2111595, "score": 10, "text": "<p>We have $0 \\cdot x = 1$ for some $x \\in R$, so $1 = 0$, and it follows that for $x\\in R$, $x = x\\cdot 1 = x \\cdot 0 = 0$. </p>\n" }, { "answer_id": 2111619, "score": 3, "text": "<p>First of all if every non zero element of $R$ has a multiplicative inverse that makes it a division ring and not a field a good example is the quaternions it only becomes a field if the multiplication is commutative. Secondly from the axioms of ring theory $0$ does not have a multiplicative inverse except for the ring {$0$}</p>\n" }, { "answer_id": 2111595, "score": 10, "text": "<p>We have $0 \\cdot x = 1$ for some $x \\in R$, so $1 = 0$, and it follows that for $x\\in R$, $x = x\\cdot 1 = x \\cdot 0 = 0$. </p>\n" }, { "answer_id": 2111619, "score": 3, "text": "<p>First of all if every non zero element of $R$ has a multiplicative inverse that makes it a division ring and not a field a good example is the quaternions it only becomes a field if the multiplication is commutative. 
Question: If every element of R has a (multiplicative) inverse, then $R = \{0\}$ Answer 1: <p>We have $0 \cdot x = 1$ for some $x \in R$, so $1 = 0$, and it follows that for $x\in R$, $x = x\cdot 1 = x \cdot 0 = 0$. </p> Answer 2: <p>First of all, if every non-zero element of $R$ has a multiplicative inverse, that makes it a division ring and not necessarily a field; a good example is the quaternions, which only form a field when the multiplication is commutative. Secondly, by the axioms of ring theory, $0$ does not have a multiplicative inverse except in the ring $\{0\}$.</p>
Secondly from the axioms of ring theory $0$ does not have a multiplicative inverse except for the ring {$0$}</p> Answer 15: <p>We have $0 \cdot x = 1$ for some $x \in R$, so $1 = 0$, and it follows that for $x\in R$, $x = x\cdot 1 = x \cdot 0 = 0$. </p> Answer 16: <p>First of all if every non zero element of $R$ has a multiplicative inverse that makes it a division ring and not a field a good example is the quaternions it only becomes a field if the multiplication is commutative. Secondly from the axioms of ring theory $0$ does not have a multiplicative inverse except for the ring {$0$}</p> Answer 17: <p>We have $0 \cdot x = 1$ for some $x \in R$, so $1 = 0$, and it follows that for $x\in R$, $x = x\cdot 1 = x \cdot 0 = 0$. </p> Answer 18: <p>First of all if every non zero element of $R$ has a multiplicative inverse that makes it a division ring and not a field a good example is the quaternions it only becomes a field if the multiplication is commutative. Secondly from the axioms of ring theory $0$ does not have a multiplicative inverse except for the ring {$0$}</p> Answer 19: <p>We have $0 \cdot x = 1$ for some $x \in R$, so $1 = 0$, and it follows that for $x\in R$, $x = x\cdot 1 = x \cdot 0 = 0$. </p> Answer 20: <p>First of all if every non zero element of $R$ has a multiplicative inverse that makes it a division ring and not a field a good example is the quaternions it only becomes a field if the multiplication is commutative. Secondly from the axioms of ring theory $0$ does not have a multiplicative inverse except for the ring {$0$}</p> Answer 21: <p>We have $0 \cdot x = 1$ for some $x \in R$, so $1 = 0$, and it follows that for $x\in R$, $x = x\cdot 1 = x \cdot 0 = 0$. </p> Answer 22: <p>First of all if every non zero element of $R$ has a multiplicative inverse that makes it a division ring and not a field a good example is the quaternions it only becomes a field if the multiplication is commutative. Secondly from the axioms of ring theory $0$ does not have a multiplicative inverse except for the ring {$0$}</p> Answer 23: <p>We have $0 \cdot x = 1$ for some $x \in R$, so $1 = 0$, and it follows that for $x\in R$, $x = x\cdot 1 = x \cdot 0 = 0$. </p> Answer 24: <p>First of all if every non zero element of $R$ has a multiplicative inverse that makes it a division ring and not a field a good example is the quaternions it only becomes a field if the multiplication is commutative. Secondly from the axioms of ring theory $0$ does not have a multiplicative inverse except for the ring {$0$}</p> Answer 25: <p>We have $0 \cdot x = 1$ for some $x \in R$, so $1 = 0$, and it follows that for $x\in R$, $x = x\cdot 1 = x \cdot 0 = 0$. </p> Answer 26: <p>First of all if every non zero element of $R$ has a multiplicative inverse that makes it a division ring and not a field a good example is the quaternions it only becomes a field if the multiplication is commutative. Secondly from the axioms of ring theory $0$ does not have a multiplicative inverse except for the ring {$0$}</p>
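As a quick illustration of the claim in this record (a sketch added for this write-up, not part of the original answers; the helper name `all_invertible` is mine), one can brute-force the rings $\mathbb{Z}/n\mathbb{Z}$ and confirm that only $n = 1$ makes every element, including $0$, invertible:

```python
# For Z/n, check whether every element (including 0) has a multiplicative inverse.
# By the answers above, this should hold only for the zero ring, i.e. n = 1
# (note 1 % 1 == 0, so in Z/1 the identity is 0 and 0 is its own inverse).
def all_invertible(n: int) -> bool:
    return all(any((x * y) % n == 1 % n for y in range(n)) for x in range(n))

print([n for n in range(1, 50) if all_invertible(n)])  # expected output: [1]
```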
Cover for Cech cohomology of the constant sheaf $\mathbb{Z}$ over $\mathbb{P}^n$
2,112,165
7
https://math.stackexchange.com/questions/2112165/cover-for-cech-cohomology-of-the-constant-sheaf-mathbbz-over-mathbbpn
<p>First of all consider $\mathbb{P}^n$ as a complex analytic manifold. In Griffiths and Harris's Principles of Algebraic Geometry, p. 145, it is stated that $$ H^2 (\mathbb{P}^n, \mathbb{Z}) \cong \mathbb{Z}, $$ that is, the second Cech cohomology group of $\mathbb{P}^n$ with coefficients in the constant sheaf $\mathbb{Z}$ is isomorphic to $\mathbb{Z}$.</p> <p>I wanted to check this using the usual algebraic cover by open sets: $\mathcal{U} = \{ U_i \}_{0 \le i \le n}$, where $U_i = \{ x_i \neq 0 \}$, but already for $\mathbb{P}^1$ this fails, meaning that $\mathcal{U}$ is not fine enough for computing Cech cohomology.</p> <p><strong>Edit:</strong> As Rene remarked in the answer below, there is a theorem of Leray which states that given a sheaf $\mathcal{F}$ and a cover $\mathcal{U}= \{ U_i \}$ such that $$ H^p(U_{i_0} \cap \dotsb \cap U_{i_k}, \mathcal{F}) = 0 $$ for all finite intersections of $\mathcal{U}$ and all $p \geq 1$, then $$ H^p(\mathcal{U},\mathcal{F}) \rightarrow H^p(X, \mathcal{F}) $$ is an isomorphism for all $p$.</p> <p>This still gives rise to questions:</p> <ol> <li>Is there a standard, or at least well known, cover of $\mathbb{P}^n$ satisfying this condition?</li> <li>Is there any intuition on how to choose such covers?</li> <li>Is there any other direct method of seeing the isomorphism $H^2 (\mathbb{P}^n, \mathbb{Z}) \cong \mathbb{Z}$ ?</li> </ol>
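To make the failure for $\mathbb{P}^1$ concrete (a worked note added here, not part of the original question): the double intersection of the standard cover is a punctured affine line, whose first cohomology does not vanish, so Leray's acyclicity hypothesis fails.

```latex
% Double intersection of the standard cover of P^1:
%   U_0 \cap U_1 = \{ [x_0 : x_1] : x_0 \neq 0,\ x_1 \neq 0 \} \cong \mathbb{C}^*,
% and C^* deformation-retracts onto the circle S^1, hence
\[
  H^1(U_0 \cap U_1, \mathbb{Z}) \cong H^1(S^1, \mathbb{Z}) \cong \mathbb{Z} \neq 0,
\]
% which violates the hypothesis H^p = 0 for p >= 1 in Leray's theorem.
```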
[ { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" }, { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" }, { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. 
Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" }, { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" }, { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. 
Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" }, { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" }, { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" }, { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. 
Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" }, { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" }, { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. 
Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" }, { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" }, { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" }, { "answer_id": 2112595, "score": 2, "text": "<p>I hope that this will not be the only answer to this question, but I'd like the point out that the theorem of Leray gives sufficent conditions for \n$$H^n(\\mathcal{U}, \\mathcal{F})\\rightarrow H^n(X,\\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n$</p>\n\n<p>$$H^n(U_{i_0}\\cap \\cdots \\cap U_{i_k},\\mathcal{F})=0$$ for all finite intersection of open sets from the cover. This is not satisfied by the open covering given in the question.</p>\n" }, { "answer_id": 2119151, "score": 2, "text": "<blockquote>\n <p>$\\text{}$1. 
Is there a standard, or at least well known, cover of $\\mathbb{P}^n$ satisfying this condition?</p>\n</blockquote>\n\n<p>Not an explicit one as far as I know.</p>\n\n<blockquote>\n <p>$\\text{}$2. Is there any intuition on how to choose such covers?</p>\n</blockquote>\n\n<p>Any covering consisting of geodesically convex sets works.</p>\n\n<blockquote>\n <p>$\\text{}$3. Is there any other direct method of seeing the isomorphism $H^2 (\\mathbb{P}^n, \\bf{\\mathbb{Z}}) \\cong \\mathbb{Z}$?</p>\n</blockquote>\n\n<p>The simplest approach in my opinion is to use the cell complex structure of $\\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>\n" } ]
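Spelling out the CW computation suggested in the second answer (an added worked note): with one cell in each even dimension, every cellular differential has source or target zero, so the cochain groups are already the cohomology.

```latex
% Cellular cochain complex of P^n (one cell in each of dimensions 0, 2, ..., 2n):
%   Z -> 0 -> Z -> 0 -> ... -> Z,   with all differentials zero,
% so
\[
  H^k(\mathbb{P}^n, \mathbb{Z}) \cong
  \begin{cases}
    \mathbb{Z}, & k = 0, 2, \dots, 2n,\\
    0, & \text{otherwise},
  \end{cases}
\]
% and in particular H^2(\mathbb{P}^n, \mathbb{Z}) \cong \mathbb{Z}, as claimed.
```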
Question: Cover for Cech cohomology of the constant sheaf $\mathbb{Z}$ over $\mathbb{P}^n$ Answer 1: <p>I hope that this will not be the only answer to this question, but I'd like to point out that the theorem of Leray gives sufficient conditions for $$H^n(\mathcal{U}, \mathcal{F})\rightarrow H^n(X,\mathcal{F})$$ to be an isomorphism for all $n$, namely that for all $n \geq 1$</p> <p>$$H^n(U_{i_0}\cap \cdots \cap U_{i_k},\mathcal{F})=0$$ for all finite intersections of open sets from the cover. This is not satisfied by the open covering given in the question.</p> Answer 2: <blockquote> <p>1. Is there a standard, or at least well known, cover of $\mathbb{P}^n$ satisfying this condition?</p> </blockquote> <p>Not an explicit one as far as I know.</p> <blockquote> <p>2. Is there any intuition on how to choose such covers?</p> </blockquote> <p>Any covering consisting of geodesically convex sets works.</p> <blockquote> <p>3. Is there any other direct method of seeing the isomorphism $H^2 (\mathbb{P}^n, \mathbb{Z}) \cong \mathbb{Z}$?</p> </blockquote> <p>The simplest approach in my opinion is to use the cell complex structure of $\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.</p>
Objects are shuffled. What is the probability that exactly one object remains in its original position?
2,112,338
7
https://math.stackexchange.com/questions/2112338/objects-are-shuffled-what-is-the-probability-that-exactly-one-object-remains-in
<p>We have a deck of $n$ cards enumerated $1,2,\ldots,n$. The deck is shuffled. What is the probability that exactly one card remains in its original position? What is the limit as $n$ tends to infinity?</p> <p>$$ \begin{array}{rcl} \{1\} &amp; : &amp; \dfrac 11 \\[6pt] \{12,21\} &amp; : &amp; \dfrac 02 \\[6pt] \{123,132,213,231,312,321\} &amp; : &amp; \dfrac 36 \\[6pt] &amp; \vdots \end{array} $$</p> <p>At $n \approx 1000$ and $10\,000$ trials: </p> <p>$$ \begin{array}{rcl} \text{value of }n &amp; &amp; \text{probability} \\ \hline 1000 &amp; &amp; 0.3739 \\ 1001 &amp; &amp; 0.3689 \\ 1002 &amp; &amp; 0.3722 \\ 1003 &amp; &amp; 0.3638 \\ 1004 &amp; &amp; 0.3707 \\ 1005 &amp; &amp; 0.3664 \\ 1006 &amp; &amp; 0.3616 \\ 1007 &amp; &amp; 0.3728 \\ 1008 &amp; &amp; 0.3702 \\ 1009 &amp; &amp; 0.3801 \end{array} $$</p> <p>At $n \approx 100\,000$ and $10\,000$ trials: </p> <p>$$ \begin{array}{rcl} \text{value of }n &amp; &amp; \text{probability} \\ \hline 100000 &amp; &amp; 0.3659 \\ 100001 &amp; &amp; 0.3552 \\ 100002 &amp; &amp; 0.356 \\ 100003 &amp; &amp; 0.367 \\ 100004 &amp; &amp; 0.3738 \\ 100005 &amp; &amp; 0.3647 \\ 100006 &amp; &amp; 0.3654 \\ 100007 &amp; &amp; 0.3637 \\ 100008 &amp; &amp; 0.3718 \\ 100009 &amp; &amp; 0.3708 \end{array} $$</p> <p>Apparently, the probability approaches something between $0.36$ and $0.38$, but how can one derive it analytically?</p>
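For reference, here is a minimal Monte Carlo sketch (added for illustration; this is not the code that produced the tables above, which is not given) that reproduces estimates of this kind:

```python
import random

def estimate(n: int, trials: int = 10_000) -> float:
    """Estimate P(exactly one fixed point) for a uniformly random permutation of n cards."""
    hits = 0
    for _ in range(trials):
        deck = list(range(n))
        random.shuffle(deck)  # uniformly random permutation
        fixed = sum(1 for position, card in enumerate(deck) if position == card)
        if fixed == 1:
            hits += 1
    return hits / trials

print(estimate(1000))  # typically ~0.37, fluctuating like the tables above
```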
[ { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" }, { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" }, { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" }, { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. 
($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" }, { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" }, { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" }, { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. 
Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" }, { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" }, { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" }, { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. 
Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" }, { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" }, { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" }, { "answer_id": 2112341, "score": 15, "text": "<p>To find the number of \"good\" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a> of $n$ objects). Then the number of \"good\" permutations is $n\\cdot !(n-1)$. 
Hence we have:</p>\n\n<p>$$\\lim_{n \\to \\infty} p_n = \\lim_{n \\to \\infty} \\frac{n\\cdot !(n-1)}{n!} = \\lim_{n \\to \\infty} \\frac{!(n-1)}{(n-1)!} = e^{-1}$$</p>\n\n<p>The last limit can be seen in the link I added above.</p>\n" }, { "answer_id": 2112345, "score": 11, "text": "<p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are\n$$ n!\\sum_{k=0}^{n}\\frac{(-1)^k}{k!} $$\npermutations without fixed points (see <a href=\"https://en.wikipedia.org/wiki/Derangement\">derangements</a>). It follows that in the same group there are\n$$ n\\cdot (n-1)!\\sum_{k=0}^{n-1}\\frac{(-1)^k}{k!} $$\nelements with exactly one fixed point, and the limit probability is $\\color{red}{\\large\\frac{1}{e}}$ in both cases.</p>\n" } ]
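A quick exact check of the closed form in the first answer (an added sketch; the derangement recurrence $D_k = (k-1)(D_{k-1} + D_{k-2})$ is standard but is not stated in the answers themselves):

```python
from fractions import Fraction
from math import factorial, e

def derangements(k: int) -> int:
    """D_k via the recurrence D_k = (k-1) * (D_{k-1} + D_{k-2}), with D_0 = 1, D_1 = 0."""
    d = [1, 0]
    for i in range(2, k + 1):
        d.append((i - 1) * (d[-1] + d[-2]))
    return d[k]

for n in (3, 10, 100, 1000):
    p = Fraction(n * derangements(n - 1), factorial(n))  # exact n * D_{n-1} / n!
    print(n, float(p))  # 0.5 for n = 3, then rapidly approaching 1/e

print(1 / e)  # 0.36787944... for comparison
```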
Question: Objects are shuffled. What is the probability that exactly one object remains in its original position? Answer 1: <p>To find the number of "good" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href="https://en.wikipedia.org/wiki/Derangement">derangements</a> of $n$ objects). Then the number of "good" permutations is $n\cdot !(n-1)$. Hence we have:</p> <p>$$\lim_{n \to \infty} p_n = \lim_{n \to \infty} \frac{n\cdot !(n-1)}{n!} = \lim_{n \to \infty} \frac{!(n-1)}{(n-1)!} = e^{-1}$$</p> <p>The last limit can be seen in the link I added above.</p> Answer 2: <p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are $$ n!\sum_{k=0}^{n}\frac{(-1)^k}{k!} $$ permutations without fixed points (see <a href="https://en.wikipedia.org/wiki/Derangement">derangements</a>). It follows that in the same group there are $$ n\cdot (n-1)!\sum_{k=0}^{n-1}\frac{(-1)^k}{k!} $$ elements with exactly one fixed point, and the limit probability is $\color{red}{\large\frac{1}{e}}$ in both cases.</p> Answer 3: <p>To find the number of "good" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href="https://en.wikipedia.org/wiki/Derangement">derangements</a> of $n$ objects). Then the number of "good" permutations is $n\cdot !(n-1)$. Hence we have:</p> <p>$$\lim_{n \to \infty} p_n = \lim_{n \to \infty} \frac{n\cdot !(n-1)}{n!} = \lim_{n \to \infty} \frac{!(n-1)}{(n-1)!} = e^{-1}$$</p> <p>The last limit can be seen in the link I added above.</p> Answer 4: <p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are $$ n!\sum_{k=0}^{n}\frac{(-1)^k}{k!} $$ permutations without fixed points (see <a href="https://en.wikipedia.org/wiki/Derangement">derangements</a>). It follows that in the same group there are $$ n\cdot (n-1)!\sum_{k=0}^{n-1}\frac{(-1)^k}{k!} $$ elements with exactly one fixed point, and the limit probability is $\color{red}{\large\frac{1}{e}}$ in both cases.</p> Answer 5: <p>To find the number of "good" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href="https://en.wikipedia.org/wiki/Derangement">derangements</a> of $n$ objects). Then the number of "good" permutations is $n\cdot !(n-1)$. Hence we have:</p> <p>$$\lim_{n \to \infty} p_n = \lim_{n \to \infty} \frac{n\cdot !(n-1)}{n!} = \lim_{n \to \infty} \frac{!(n-1)}{(n-1)!} = e^{-1}$$</p> <p>The last limit can be seen in the link I added above.</p> Answer 6: <p>By the inclusion-exclusion principle, in the symmetric group $S_n$ there are $$ n!\sum_{k=0}^{n}\frac{(-1)^k}{k!} $$ permutations without fixed points (see <a href="https://en.wikipedia.org/wiki/Derangement">derangements</a>). It follows that in the same group there are $$ n\cdot (n-1)!\sum_{k=0}^{n-1}\frac{(-1)^k}{k!} $$ elements with exactly one fixed point, and the limit probability is $\color{red}{\large\frac{1}{e}}$ in both cases.</p> Answer 7: <p>To find the number of "good" permutations fix one card and derange the rest $(n-1)$ cards. This can be done in $!n$ ways. ($!n$ is the number of <a href="https://en.wikipedia.org/wiki/Derangement">derangements</a> of $n$ objects). Then the number of "good" permutations is $n\cdot !(n-1)$. 
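A quick numerical check of the limit in the two answers above — an illustrative Python sketch, not part of either answer; the helper name `derangements` is my own, and the recurrence $!n=(n-1)\,(!(n-1)+!(n-2))$ is the standard one:

```python
import math

def derangements(n):
    # !n via the standard recurrence !n = (n - 1) * (!(n-1) + !(n-2)),
    # with base cases !0 = 1 and !1 = 0.
    d = [1, 0]
    for k in range(2, n + 1):
        d.append((k - 1) * (d[k - 1] + d[k - 2]))
    return d[n]

for n in (5, 10, 20, 50):
    # p_n = (number of permutations with exactly one fixed point) / n!
    p_n = n * derangements(n - 1) / math.factorial(n)
    print(n, p_n, abs(p_n - 1 / math.e))
# The gap to 1/e shrinks rapidly, matching the limit e^{-1} derived above.
```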
Show that $a^{61} \equiv a\pmod{1001}$ for every $a \in \mathbb{N}$
2,112,666
7
https://math.stackexchange.com/questions/2112666/show-that-a61-equiv-a-pmod1001-for-every-a-in-mathbbn
<p>I'm asked to show that <span class="math-container">$a^{61} \equiv a\pmod{1001}$</span> for every <span class="math-container">$a \in \mathbb{N}$</span>. I've tried to tackle this using Fermat's Little Theorem and Euler's Theorem, but I can't even get started. My main problem seems to be the "for every <span class="math-container">$a \in \mathbb{N}$</span>" part, because if it were restricted to <span class="math-container">$\gcd(a, 1001) = 1$</span> I would be able to factor 1001 and try to show that its factors divide <span class="math-container">$a^{61} - a$</span>. I'm kinda lost here. Any help is appreciated.</p>
[ { "answer_id": 2112671, "score": 5, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$\\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. Since <span class=\"math-container\">$\\rm\\:n = 1001 = \\color{#C00}7\\cdot\\color{#0A0}{11}\\cdot\\color{brown}{13},\\:$</span> is squarefree, it suffices to check <span class=\"math-container\">$\\rm\\:\\color{#C00}6,\\color{#0A0}{10},\\color{brown}{12}\\:|\\:61\\!-\\!1 = e\\!-\\!1.$</span></p>\n<p><strong>Theorem</strong> <span class=\"math-container\">$\\ $</span> For natural numbers <span class=\"math-container\">$\\rm\\:a,e,n\\:$</span> with <span class=\"math-container\">$\\rm\\:e,n&gt;1$</span></p>\n<p><span class=\"math-container\">$\\qquad\\rm n\\:|\\:a^{\\large e}-a\\:$</span> for all <span class=\"math-container\">$\\rm\\:a\\:\\iff n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: \\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ (\\Leftarrow)\\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class=\"math-container\">$\\rm\\:p\\:|\\:a^{\\large e}\\!-\\!a\\:$</span> for each prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n.\\:$</span> It's clear if <span class=\"math-container\">$\\rm\\,p \\mid a.\\,$</span> Else <span class=\"math-container\">$\\rm\\!\\bmod p\\!:\\ a \\not\\equiv 0\\,$</span> <span class=\"math-container\">$\\rm\\overset{\\rm Fermat}\\Longrightarrow\\, \\color{c00}{a^{\\large\\color{#0a0}{p-1}} \\equiv 1}$</span> so <span class=\"math-container\">$\\rm\\,\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}\\equiv 1}\\,$</span> by <span class=\"math-container\">$\\rm \\,\\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}\\,$</span> and <a href=\"https://math.stackexchange.com/a/2033681/242\">modular order reduction</a>. Thus <span class=\"math-container\">$\\rm\\,a^{\\large e}-a\\equiv a(\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}-1})\\equiv a(\\color{#c00}0)\\equiv 0$</span>. <span class=\"math-container\">$\\ \\ (\\Rightarrow)\\ \\ $</span> See <a href=\"https://math.stackexchange.com/a/164568/242\">here</a> (not used here).</p>\n<p><strong>Corollary</strong> <span class=\"math-container\">$\\rm\\,\\ n\\mid a^e b - a b^f\\ $</span> if <span class=\"math-container\">$\\,\\rm n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e\\!-\\!1,\\,f\\!-\\!1$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\rm\\bmod n\\!:\\,\\ a^e\\equiv a,\\, b^f\\equiv b\\,$</span> so <span class=\"math-container\">$\\rm\\,a^e b - ab^f\\equiv ab-ab\\equiv 0$</span></p>\n<p><strong>Corollary'</strong> <span class=\"math-container\">$\\ n\\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\\,$</span> if <span class=\"math-container\">$\\,n\\,$</span> is squarefree, and prime <span class=\"math-container\">$\\,p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e_i\\!-\\!1\\,$</span> and <span class=\"math-container\">$\\,f(x,y)\\in\\Bbb Z[x,y],\\,$</span> i.e. 
<span class=\"math-container\">$\\,f\\,$</span> is a polynomial in <span class=\"math-container\">$\\,x,y\\,$</span> with integer coefficients.</p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\bmod n\\!:\\,\\ a^{e_i}\\equiv a,\\, b^{e_i}\\equiv b\\,$</span> so <span class=\"math-container\">$f(a^{e_i},b^{e_i}) \\equiv f(a,b)\\,$</span> by the <a href=\"https://math.stackexchange.com/a/879262/242\">Polynomial Congruence Rule</a>.</p>\n" }, { "answer_id": 3540835, "score": 2, "text": "<p>HINT.-<span class=\"math-container\">$$\\begin{cases}a^{61}=(a^7)^8\\cdot a^5\\equiv a^8\\cdot a^5\\equiv a^7\\equiv a\\mod7\\\\a^{61}=(a^{11})^5\\cdot a^6\\equiv a^ {11}\\equiv a\\mod 11\\\\a^{61}=(a^{13})^4\\cdot a^9\\equiv a^{13}\\equiv a\\mod13\\end{cases}$$</span></p>\n\n<p>Thus <span class=\"math-container\">$$a^{61}-a=7m_1=11m_2=13m_3\\Rightarrow?$$</span></p>\n" }, { "answer_id": 2112669, "score": 1, "text": "<p>Which version of Fermat's Little Theorem are you using? If you use</p>\n\n<blockquote>\n <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\\equiv a\\pmod p$ for every integer $a$</p>\n</blockquote>\n\n<p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p>\n\n<p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p>\n" }, { "answer_id": 2112703, "score": 0, "text": "<p>The smallest positive integer greater than $k$ such that $a^k \\equiv 1 \\bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href=\"https://en.wikipedia.org/wiki/Carmichael_function\" rel=\"nofollow noreferrer\">this wikipedia page</a>.</p>\n\n<p>In the case in which $n$ is square free, $\\lambda(n)+1$ is the smallest number such that $a^{\\lambda(n)}\\equiv a \\bmod n$ for all $a$. We have that $\\lambda(10001)+1=61$</p>\n" }, { "answer_id": 2112671, "score": 5, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$\\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class=\"math-container\">$\\rm\\:n = 1001 = \\color{#C00}7\\cdot\\color{#0A0}{11}\\cdot\\color{brown}{13},\\:$</span> is squarefree, it suffices to check <span class=\"math-container\">$\\rm\\:\\color{#C00}6,\\color{#0A0}{10},\\color{brown}{12}\\:|\\:61\\!-\\!1 = e\\!-\\!1.$</span></p>\n<p><strong>Theorem</strong> <span class=\"math-container\">$\\ $</span> For natural numbers <span class=\"math-container\">$\\rm\\:a,e,n\\:$</span> with <span class=\"math-container\">$\\rm\\:e,n&gt;1$</span></p>\n<p><span class=\"math-container\">$\\qquad\\rm n\\:|\\:a^{\\large e}-a\\:$</span> for all <span class=\"math-container\">$\\rm\\:a\\:\\iff n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: \\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ (\\Leftarrow)\\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class=\"math-container\">$\\rm\\:p\\:|\\:a^{\\large e}\\!-\\!a\\:$</span> for each prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n.\\:$</span> It's clear if <span class=\"math-container\">$\\rm\\,p \\mid a.\\,$</span> Else <span class=\"math-container\">$\\rm\\!\\bmod p\\!:\\ a \\not\\equiv 0\\,$</span> <span class=\"math-container\">$\\rm\\overset{\\rm Fermat}\\Longrightarrow\\, \\color{c00}{a^{\\large\\color{#0a0}{p-1}} \\equiv 1}$</span> so <span class=\"math-container\">$\\rm\\,\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}\\equiv 1}\\,$</span> by <span class=\"math-container\">$\\rm \\,\\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}\\,$</span> and <a href=\"https://math.stackexchange.com/a/2033681/242\">modular order reduction</a>. Thus <span class=\"math-container\">$\\rm\\,a^{\\large e}-a\\equiv a(\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}-1})\\equiv a(\\color{#c00}0)\\equiv 0$</span>. <span class=\"math-container\">$\\ \\ (\\Rightarrow)\\ \\ $</span> See <a href=\"https://math.stackexchange.com/a/164568/242\">here</a> (not used here).</p>\n<p><strong>Corollary</strong> <span class=\"math-container\">$\\rm\\,\\ n\\mid a^e b - a b^f\\ $</span> if <span class=\"math-container\">$\\,\\rm n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e\\!-\\!1,\\,f\\!-\\!1$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\rm\\bmod n\\!:\\,\\ a^e\\equiv a,\\, b^f\\equiv b\\,$</span> so <span class=\"math-container\">$\\rm\\,a^e b - ab^f\\equiv ab-ab\\equiv 0$</span></p>\n<p><strong>Corollary'</strong> <span class=\"math-container\">$\\ n\\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\\,$</span> if <span class=\"math-container\">$\\,n\\,$</span> is squarefree, and prime <span class=\"math-container\">$\\,p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e_i\\!-\\!1\\,$</span> and <span class=\"math-container\">$\\,f(x,y)\\in\\Bbb Z[x,y],\\,$</span> i.e. 
<span class=\"math-container\">$\\,f\\,$</span> is a polynomial in <span class=\"math-container\">$\\,x,y\\,$</span> with integer coefficients.</p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\bmod n\\!:\\,\\ a^{e_i}\\equiv a,\\, b^{e_i}\\equiv b\\,$</span> so <span class=\"math-container\">$f(a^{e_i},b^{e_i}) \\equiv f(a,b)\\,$</span> by the <a href=\"https://math.stackexchange.com/a/879262/242\">Polynomial Congruence Rule</a>.</p>\n" }, { "answer_id": 3540835, "score": 2, "text": "<p>HINT.-<span class=\"math-container\">$$\\begin{cases}a^{61}=(a^7)^8\\cdot a^5\\equiv a^8\\cdot a^5\\equiv a^7\\equiv a\\mod7\\\\a^{61}=(a^{11})^5\\cdot a^6\\equiv a^ {11}\\equiv a\\mod 11\\\\a^{61}=(a^{13})^4\\cdot a^9\\equiv a^{13}\\equiv a\\mod13\\end{cases}$$</span></p>\n\n<p>Thus <span class=\"math-container\">$$a^{61}-a=7m_1=11m_2=13m_3\\Rightarrow?$$</span></p>\n" }, { "answer_id": 2112669, "score": 1, "text": "<p>Which version of Fermat's Little Theorem are you using? If you use</p>\n\n<blockquote>\n <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\\equiv a\\pmod p$ for every integer $a$</p>\n</blockquote>\n\n<p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p>\n\n<p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p>\n" }, { "answer_id": 2112703, "score": 0, "text": "<p>The smallest positive integer greater than $k$ such that $a^k \\equiv 1 \\bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href=\"https://en.wikipedia.org/wiki/Carmichael_function\" rel=\"nofollow noreferrer\">this wikipedia page</a>.</p>\n\n<p>In the case in which $n$ is square free, $\\lambda(n)+1$ is the smallest number such that $a^{\\lambda(n)}\\equiv a \\bmod n$ for all $a$. We have that $\\lambda(10001)+1=61$</p>\n" }, { "answer_id": 2112671, "score": 5, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$\\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class=\"math-container\">$\\rm\\:n = 1001 = \\color{#C00}7\\cdot\\color{#0A0}{11}\\cdot\\color{brown}{13},\\:$</span> is squarefree, it suffices to check <span class=\"math-container\">$\\rm\\:\\color{#C00}6,\\color{#0A0}{10},\\color{brown}{12}\\:|\\:61\\!-\\!1 = e\\!-\\!1.$</span></p>\n<p><strong>Theorem</strong> <span class=\"math-container\">$\\ $</span> For natural numbers <span class=\"math-container\">$\\rm\\:a,e,n\\:$</span> with <span class=\"math-container\">$\\rm\\:e,n&gt;1$</span></p>\n<p><span class=\"math-container\">$\\qquad\\rm n\\:|\\:a^{\\large e}-a\\:$</span> for all <span class=\"math-container\">$\\rm\\:a\\:\\iff n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: \\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ (\\Leftarrow)\\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class=\"math-container\">$\\rm\\:p\\:|\\:a^{\\large e}\\!-\\!a\\:$</span> for each prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n.\\:$</span> It's clear if <span class=\"math-container\">$\\rm\\,p \\mid a.\\,$</span> Else <span class=\"math-container\">$\\rm\\!\\bmod p\\!:\\ a \\not\\equiv 0\\,$</span> <span class=\"math-container\">$\\rm\\overset{\\rm Fermat}\\Longrightarrow\\, \\color{c00}{a^{\\large\\color{#0a0}{p-1}} \\equiv 1}$</span> so <span class=\"math-container\">$\\rm\\,\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}\\equiv 1}\\,$</span> by <span class=\"math-container\">$\\rm \\,\\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}\\,$</span> and <a href=\"https://math.stackexchange.com/a/2033681/242\">modular order reduction</a>. Thus <span class=\"math-container\">$\\rm\\,a^{\\large e}-a\\equiv a(\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}-1})\\equiv a(\\color{#c00}0)\\equiv 0$</span>. <span class=\"math-container\">$\\ \\ (\\Rightarrow)\\ \\ $</span> See <a href=\"https://math.stackexchange.com/a/164568/242\">here</a> (not used here).</p>\n<p><strong>Corollary</strong> <span class=\"math-container\">$\\rm\\,\\ n\\mid a^e b - a b^f\\ $</span> if <span class=\"math-container\">$\\,\\rm n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e\\!-\\!1,\\,f\\!-\\!1$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\rm\\bmod n\\!:\\,\\ a^e\\equiv a,\\, b^f\\equiv b\\,$</span> so <span class=\"math-container\">$\\rm\\,a^e b - ab^f\\equiv ab-ab\\equiv 0$</span></p>\n<p><strong>Corollary'</strong> <span class=\"math-container\">$\\ n\\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\\,$</span> if <span class=\"math-container\">$\\,n\\,$</span> is squarefree, and prime <span class=\"math-container\">$\\,p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e_i\\!-\\!1\\,$</span> and <span class=\"math-container\">$\\,f(x,y)\\in\\Bbb Z[x,y],\\,$</span> i.e. 
<span class=\"math-container\">$\\,f\\,$</span> is a polynomial in <span class=\"math-container\">$\\,x,y\\,$</span> with integer coefficients.</p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\bmod n\\!:\\,\\ a^{e_i}\\equiv a,\\, b^{e_i}\\equiv b\\,$</span> so <span class=\"math-container\">$f(a^{e_i},b^{e_i}) \\equiv f(a,b)\\,$</span> by the <a href=\"https://math.stackexchange.com/a/879262/242\">Polynomial Congruence Rule</a>.</p>\n" }, { "answer_id": 3540835, "score": 2, "text": "<p>HINT.-<span class=\"math-container\">$$\\begin{cases}a^{61}=(a^7)^8\\cdot a^5\\equiv a^8\\cdot a^5\\equiv a^7\\equiv a\\mod7\\\\a^{61}=(a^{11})^5\\cdot a^6\\equiv a^ {11}\\equiv a\\mod 11\\\\a^{61}=(a^{13})^4\\cdot a^9\\equiv a^{13}\\equiv a\\mod13\\end{cases}$$</span></p>\n\n<p>Thus <span class=\"math-container\">$$a^{61}-a=7m_1=11m_2=13m_3\\Rightarrow?$$</span></p>\n" }, { "answer_id": 2112669, "score": 1, "text": "<p>Which version of Fermat's Little Theorem are you using? If you use</p>\n\n<blockquote>\n <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\\equiv a\\pmod p$ for every integer $a$</p>\n</blockquote>\n\n<p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p>\n\n<p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p>\n" }, { "answer_id": 2112703, "score": 0, "text": "<p>The smallest positive integer greater than $k$ such that $a^k \\equiv 1 \\bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href=\"https://en.wikipedia.org/wiki/Carmichael_function\" rel=\"nofollow noreferrer\">this wikipedia page</a>.</p>\n\n<p>In the case in which $n$ is square free, $\\lambda(n)+1$ is the smallest number such that $a^{\\lambda(n)}\\equiv a \\bmod n$ for all $a$. We have that $\\lambda(10001)+1=61$</p>\n" }, { "answer_id": 2112671, "score": 5, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$\\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class=\"math-container\">$\\rm\\:n = 1001 = \\color{#C00}7\\cdot\\color{#0A0}{11}\\cdot\\color{brown}{13},\\:$</span> is squarefree, it suffices to check <span class=\"math-container\">$\\rm\\:\\color{#C00}6,\\color{#0A0}{10},\\color{brown}{12}\\:|\\:61\\!-\\!1 = e\\!-\\!1.$</span></p>\n<p><strong>Theorem</strong> <span class=\"math-container\">$\\ $</span> For natural numbers <span class=\"math-container\">$\\rm\\:a,e,n\\:$</span> with <span class=\"math-container\">$\\rm\\:e,n&gt;1$</span></p>\n<p><span class=\"math-container\">$\\qquad\\rm n\\:|\\:a^{\\large e}-a\\:$</span> for all <span class=\"math-container\">$\\rm\\:a\\:\\iff n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: \\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ (\\Leftarrow)\\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class=\"math-container\">$\\rm\\:p\\:|\\:a^{\\large e}\\!-\\!a\\:$</span> for each prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n.\\:$</span> It's clear if <span class=\"math-container\">$\\rm\\,p \\mid a.\\,$</span> Else <span class=\"math-container\">$\\rm\\!\\bmod p\\!:\\ a \\not\\equiv 0\\,$</span> <span class=\"math-container\">$\\rm\\overset{\\rm Fermat}\\Longrightarrow\\, \\color{c00}{a^{\\large\\color{#0a0}{p-1}} \\equiv 1}$</span> so <span class=\"math-container\">$\\rm\\,\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}\\equiv 1}\\,$</span> by <span class=\"math-container\">$\\rm \\,\\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}\\,$</span> and <a href=\"https://math.stackexchange.com/a/2033681/242\">modular order reduction</a>. Thus <span class=\"math-container\">$\\rm\\,a^{\\large e}-a\\equiv a(\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}-1})\\equiv a(\\color{#c00}0)\\equiv 0$</span>. <span class=\"math-container\">$\\ \\ (\\Rightarrow)\\ \\ $</span> See <a href=\"https://math.stackexchange.com/a/164568/242\">here</a> (not used here).</p>\n<p><strong>Corollary</strong> <span class=\"math-container\">$\\rm\\,\\ n\\mid a^e b - a b^f\\ $</span> if <span class=\"math-container\">$\\,\\rm n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e\\!-\\!1,\\,f\\!-\\!1$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\rm\\bmod n\\!:\\,\\ a^e\\equiv a,\\, b^f\\equiv b\\,$</span> so <span class=\"math-container\">$\\rm\\,a^e b - ab^f\\equiv ab-ab\\equiv 0$</span></p>\n<p><strong>Corollary'</strong> <span class=\"math-container\">$\\ n\\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\\,$</span> if <span class=\"math-container\">$\\,n\\,$</span> is squarefree, and prime <span class=\"math-container\">$\\,p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e_i\\!-\\!1\\,$</span> and <span class=\"math-container\">$\\,f(x,y)\\in\\Bbb Z[x,y],\\,$</span> i.e. 
<span class=\"math-container\">$\\,f\\,$</span> is a polynomial in <span class=\"math-container\">$\\,x,y\\,$</span> with integer coefficients.</p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\bmod n\\!:\\,\\ a^{e_i}\\equiv a,\\, b^{e_i}\\equiv b\\,$</span> so <span class=\"math-container\">$f(a^{e_i},b^{e_i}) \\equiv f(a,b)\\,$</span> by the <a href=\"https://math.stackexchange.com/a/879262/242\">Polynomial Congruence Rule</a>.</p>\n" }, { "answer_id": 3540835, "score": 2, "text": "<p>HINT.-<span class=\"math-container\">$$\\begin{cases}a^{61}=(a^7)^8\\cdot a^5\\equiv a^8\\cdot a^5\\equiv a^7\\equiv a\\mod7\\\\a^{61}=(a^{11})^5\\cdot a^6\\equiv a^ {11}\\equiv a\\mod 11\\\\a^{61}=(a^{13})^4\\cdot a^9\\equiv a^{13}\\equiv a\\mod13\\end{cases}$$</span></p>\n\n<p>Thus <span class=\"math-container\">$$a^{61}-a=7m_1=11m_2=13m_3\\Rightarrow?$$</span></p>\n" }, { "answer_id": 2112669, "score": 1, "text": "<p>Which version of Fermat's Little Theorem are you using? If you use</p>\n\n<blockquote>\n <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\\equiv a\\pmod p$ for every integer $a$</p>\n</blockquote>\n\n<p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p>\n\n<p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p>\n" }, { "answer_id": 2112703, "score": 0, "text": "<p>The smallest positive integer greater than $k$ such that $a^k \\equiv 1 \\bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href=\"https://en.wikipedia.org/wiki/Carmichael_function\" rel=\"nofollow noreferrer\">this wikipedia page</a>.</p>\n\n<p>In the case in which $n$ is square free, $\\lambda(n)+1$ is the smallest number such that $a^{\\lambda(n)}\\equiv a \\bmod n$ for all $a$. We have that $\\lambda(10001)+1=61$</p>\n" }, { "answer_id": 2112671, "score": 5, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$\\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class=\"math-container\">$\\rm\\:n = 1001 = \\color{#C00}7\\cdot\\color{#0A0}{11}\\cdot\\color{brown}{13},\\:$</span> is squarefree, it suffices to check <span class=\"math-container\">$\\rm\\:\\color{#C00}6,\\color{#0A0}{10},\\color{brown}{12}\\:|\\:61\\!-\\!1 = e\\!-\\!1.$</span></p>\n<p><strong>Theorem</strong> <span class=\"math-container\">$\\ $</span> For natural numbers <span class=\"math-container\">$\\rm\\:a,e,n\\:$</span> with <span class=\"math-container\">$\\rm\\:e,n&gt;1$</span></p>\n<p><span class=\"math-container\">$\\qquad\\rm n\\:|\\:a^{\\large e}-a\\:$</span> for all <span class=\"math-container\">$\\rm\\:a\\:\\iff n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: \\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ (\\Leftarrow)\\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class=\"math-container\">$\\rm\\:p\\:|\\:a^{\\large e}\\!-\\!a\\:$</span> for each prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n.\\:$</span> It's clear if <span class=\"math-container\">$\\rm\\,p \\mid a.\\,$</span> Else <span class=\"math-container\">$\\rm\\!\\bmod p\\!:\\ a \\not\\equiv 0\\,$</span> <span class=\"math-container\">$\\rm\\overset{\\rm Fermat}\\Longrightarrow\\, \\color{c00}{a^{\\large\\color{#0a0}{p-1}} \\equiv 1}$</span> so <span class=\"math-container\">$\\rm\\,\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}\\equiv 1}\\,$</span> by <span class=\"math-container\">$\\rm \\,\\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}\\,$</span> and <a href=\"https://math.stackexchange.com/a/2033681/242\">modular order reduction</a>. Thus <span class=\"math-container\">$\\rm\\,a^{\\large e}-a\\equiv a(\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}-1})\\equiv a(\\color{#c00}0)\\equiv 0$</span>. <span class=\"math-container\">$\\ \\ (\\Rightarrow)\\ \\ $</span> See <a href=\"https://math.stackexchange.com/a/164568/242\">here</a> (not used here).</p>\n<p><strong>Corollary</strong> <span class=\"math-container\">$\\rm\\,\\ n\\mid a^e b - a b^f\\ $</span> if <span class=\"math-container\">$\\,\\rm n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e\\!-\\!1,\\,f\\!-\\!1$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\rm\\bmod n\\!:\\,\\ a^e\\equiv a,\\, b^f\\equiv b\\,$</span> so <span class=\"math-container\">$\\rm\\,a^e b - ab^f\\equiv ab-ab\\equiv 0$</span></p>\n<p><strong>Corollary'</strong> <span class=\"math-container\">$\\ n\\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\\,$</span> if <span class=\"math-container\">$\\,n\\,$</span> is squarefree, and prime <span class=\"math-container\">$\\,p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e_i\\!-\\!1\\,$</span> and <span class=\"math-container\">$\\,f(x,y)\\in\\Bbb Z[x,y],\\,$</span> i.e. 
<span class=\"math-container\">$\\,f\\,$</span> is a polynomial in <span class=\"math-container\">$\\,x,y\\,$</span> with integer coefficients.</p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\bmod n\\!:\\,\\ a^{e_i}\\equiv a,\\, b^{e_i}\\equiv b\\,$</span> so <span class=\"math-container\">$f(a^{e_i},b^{e_i}) \\equiv f(a,b)\\,$</span> by the <a href=\"https://math.stackexchange.com/a/879262/242\">Polynomial Congruence Rule</a>.</p>\n" }, { "answer_id": 3540835, "score": 2, "text": "<p>HINT.-<span class=\"math-container\">$$\\begin{cases}a^{61}=(a^7)^8\\cdot a^5\\equiv a^8\\cdot a^5\\equiv a^7\\equiv a\\mod7\\\\a^{61}=(a^{11})^5\\cdot a^6\\equiv a^ {11}\\equiv a\\mod 11\\\\a^{61}=(a^{13})^4\\cdot a^9\\equiv a^{13}\\equiv a\\mod13\\end{cases}$$</span></p>\n\n<p>Thus <span class=\"math-container\">$$a^{61}-a=7m_1=11m_2=13m_3\\Rightarrow?$$</span></p>\n" }, { "answer_id": 2112669, "score": 1, "text": "<p>Which version of Fermat's Little Theorem are you using? If you use</p>\n\n<blockquote>\n <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\\equiv a\\pmod p$ for every integer $a$</p>\n</blockquote>\n\n<p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p>\n\n<p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p>\n" }, { "answer_id": 2112703, "score": 0, "text": "<p>The smallest positive integer greater than $k$ such that $a^k \\equiv 1 \\bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href=\"https://en.wikipedia.org/wiki/Carmichael_function\" rel=\"nofollow noreferrer\">this wikipedia page</a>.</p>\n\n<p>In the case in which $n$ is square free, $\\lambda(n)+1$ is the smallest number such that $a^{\\lambda(n)}\\equiv a \\bmod n$ for all $a$. We have that $\\lambda(10001)+1=61$</p>\n" }, { "answer_id": 2112671, "score": 5, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$\\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class=\"math-container\">$\\rm\\:n = 1001 = \\color{#C00}7\\cdot\\color{#0A0}{11}\\cdot\\color{brown}{13},\\:$</span> is squarefree, it suffices to check <span class=\"math-container\">$\\rm\\:\\color{#C00}6,\\color{#0A0}{10},\\color{brown}{12}\\:|\\:61\\!-\\!1 = e\\!-\\!1.$</span></p>\n<p><strong>Theorem</strong> <span class=\"math-container\">$\\ $</span> For natural numbers <span class=\"math-container\">$\\rm\\:a,e,n\\:$</span> with <span class=\"math-container\">$\\rm\\:e,n&gt;1$</span></p>\n<p><span class=\"math-container\">$\\qquad\\rm n\\:|\\:a^{\\large e}-a\\:$</span> for all <span class=\"math-container\">$\\rm\\:a\\:\\iff n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: \\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ (\\Leftarrow)\\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class=\"math-container\">$\\rm\\:p\\:|\\:a^{\\large e}\\!-\\!a\\:$</span> for each prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n.\\:$</span> It's clear if <span class=\"math-container\">$\\rm\\,p \\mid a.\\,$</span> Else <span class=\"math-container\">$\\rm\\!\\bmod p\\!:\\ a \\not\\equiv 0\\,$</span> <span class=\"math-container\">$\\rm\\overset{\\rm Fermat}\\Longrightarrow\\, \\color{c00}{a^{\\large\\color{#0a0}{p-1}} \\equiv 1}$</span> so <span class=\"math-container\">$\\rm\\,\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}\\equiv 1}\\,$</span> by <span class=\"math-container\">$\\rm \\,\\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}\\,$</span> and <a href=\"https://math.stackexchange.com/a/2033681/242\">modular order reduction</a>. Thus <span class=\"math-container\">$\\rm\\,a^{\\large e}-a\\equiv a(\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}-1})\\equiv a(\\color{#c00}0)\\equiv 0$</span>. <span class=\"math-container\">$\\ \\ (\\Rightarrow)\\ \\ $</span> See <a href=\"https://math.stackexchange.com/a/164568/242\">here</a> (not used here).</p>\n<p><strong>Corollary</strong> <span class=\"math-container\">$\\rm\\,\\ n\\mid a^e b - a b^f\\ $</span> if <span class=\"math-container\">$\\,\\rm n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e\\!-\\!1,\\,f\\!-\\!1$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\rm\\bmod n\\!:\\,\\ a^e\\equiv a,\\, b^f\\equiv b\\,$</span> so <span class=\"math-container\">$\\rm\\,a^e b - ab^f\\equiv ab-ab\\equiv 0$</span></p>\n<p><strong>Corollary'</strong> <span class=\"math-container\">$\\ n\\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\\,$</span> if <span class=\"math-container\">$\\,n\\,$</span> is squarefree, and prime <span class=\"math-container\">$\\,p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e_i\\!-\\!1\\,$</span> and <span class=\"math-container\">$\\,f(x,y)\\in\\Bbb Z[x,y],\\,$</span> i.e. 
<span class=\"math-container\">$\\,f\\,$</span> is a polynomial in <span class=\"math-container\">$\\,x,y\\,$</span> with integer coefficients.</p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\bmod n\\!:\\,\\ a^{e_i}\\equiv a,\\, b^{e_i}\\equiv b\\,$</span> so <span class=\"math-container\">$f(a^{e_i},b^{e_i}) \\equiv f(a,b)\\,$</span> by the <a href=\"https://math.stackexchange.com/a/879262/242\">Polynomial Congruence Rule</a>.</p>\n" }, { "answer_id": 3540835, "score": 2, "text": "<p>HINT.-<span class=\"math-container\">$$\\begin{cases}a^{61}=(a^7)^8\\cdot a^5\\equiv a^8\\cdot a^5\\equiv a^7\\equiv a\\mod7\\\\a^{61}=(a^{11})^5\\cdot a^6\\equiv a^ {11}\\equiv a\\mod 11\\\\a^{61}=(a^{13})^4\\cdot a^9\\equiv a^{13}\\equiv a\\mod13\\end{cases}$$</span></p>\n\n<p>Thus <span class=\"math-container\">$$a^{61}-a=7m_1=11m_2=13m_3\\Rightarrow?$$</span></p>\n" }, { "answer_id": 2112669, "score": 1, "text": "<p>Which version of Fermat's Little Theorem are you using? If you use</p>\n\n<blockquote>\n <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\\equiv a\\pmod p$ for every integer $a$</p>\n</blockquote>\n\n<p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p>\n\n<p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p>\n" }, { "answer_id": 2112703, "score": 0, "text": "<p>The smallest positive integer greater than $k$ such that $a^k \\equiv 1 \\bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href=\"https://en.wikipedia.org/wiki/Carmichael_function\" rel=\"nofollow noreferrer\">this wikipedia page</a>.</p>\n\n<p>In the case in which $n$ is square free, $\\lambda(n)+1$ is the smallest number such that $a^{\\lambda(n)}\\equiv a \\bmod n$ for all $a$. We have that $\\lambda(10001)+1=61$</p>\n" }, { "answer_id": 2112671, "score": 5, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$\\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class=\"math-container\">$\\rm\\:n = 1001 = \\color{#C00}7\\cdot\\color{#0A0}{11}\\cdot\\color{brown}{13},\\:$</span> is squarefree, it suffices to check <span class=\"math-container\">$\\rm\\:\\color{#C00}6,\\color{#0A0}{10},\\color{brown}{12}\\:|\\:61\\!-\\!1 = e\\!-\\!1.$</span></p>\n<p><strong>Theorem</strong> <span class=\"math-container\">$\\ $</span> For natural numbers <span class=\"math-container\">$\\rm\\:a,e,n\\:$</span> with <span class=\"math-container\">$\\rm\\:e,n&gt;1$</span></p>\n<p><span class=\"math-container\">$\\qquad\\rm n\\:|\\:a^{\\large e}-a\\:$</span> for all <span class=\"math-container\">$\\rm\\:a\\:\\iff n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: \\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ (\\Leftarrow)\\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class=\"math-container\">$\\rm\\:p\\:|\\:a^{\\large e}\\!-\\!a\\:$</span> for each prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n.\\:$</span> It's clear if <span class=\"math-container\">$\\rm\\,p \\mid a.\\,$</span> Else <span class=\"math-container\">$\\rm\\!\\bmod p\\!:\\ a \\not\\equiv 0\\,$</span> <span class=\"math-container\">$\\rm\\overset{\\rm Fermat}\\Longrightarrow\\, \\color{c00}{a^{\\large\\color{#0a0}{p-1}} \\equiv 1}$</span> so <span class=\"math-container\">$\\rm\\,\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}\\equiv 1}\\,$</span> by <span class=\"math-container\">$\\rm \\,\\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}\\,$</span> and <a href=\"https://math.stackexchange.com/a/2033681/242\">modular order reduction</a>. Thus <span class=\"math-container\">$\\rm\\,a^{\\large e}-a\\equiv a(\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}-1})\\equiv a(\\color{#c00}0)\\equiv 0$</span>. <span class=\"math-container\">$\\ \\ (\\Rightarrow)\\ \\ $</span> See <a href=\"https://math.stackexchange.com/a/164568/242\">here</a> (not used here).</p>\n<p><strong>Corollary</strong> <span class=\"math-container\">$\\rm\\,\\ n\\mid a^e b - a b^f\\ $</span> if <span class=\"math-container\">$\\,\\rm n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e\\!-\\!1,\\,f\\!-\\!1$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\rm\\bmod n\\!:\\,\\ a^e\\equiv a,\\, b^f\\equiv b\\,$</span> so <span class=\"math-container\">$\\rm\\,a^e b - ab^f\\equiv ab-ab\\equiv 0$</span></p>\n<p><strong>Corollary'</strong> <span class=\"math-container\">$\\ n\\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\\,$</span> if <span class=\"math-container\">$\\,n\\,$</span> is squarefree, and prime <span class=\"math-container\">$\\,p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e_i\\!-\\!1\\,$</span> and <span class=\"math-container\">$\\,f(x,y)\\in\\Bbb Z[x,y],\\,$</span> i.e. 
<span class=\"math-container\">$\\,f\\,$</span> is a polynomial in <span class=\"math-container\">$\\,x,y\\,$</span> with integer coefficients.</p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\bmod n\\!:\\,\\ a^{e_i}\\equiv a,\\, b^{e_i}\\equiv b\\,$</span> so <span class=\"math-container\">$f(a^{e_i},b^{e_i}) \\equiv f(a,b)\\,$</span> by the <a href=\"https://math.stackexchange.com/a/879262/242\">Polynomial Congruence Rule</a>.</p>\n" }, { "answer_id": 3540835, "score": 2, "text": "<p>HINT.-<span class=\"math-container\">$$\\begin{cases}a^{61}=(a^7)^8\\cdot a^5\\equiv a^8\\cdot a^5\\equiv a^7\\equiv a\\mod7\\\\a^{61}=(a^{11})^5\\cdot a^6\\equiv a^ {11}\\equiv a\\mod 11\\\\a^{61}=(a^{13})^4\\cdot a^9\\equiv a^{13}\\equiv a\\mod13\\end{cases}$$</span></p>\n\n<p>Thus <span class=\"math-container\">$$a^{61}-a=7m_1=11m_2=13m_3\\Rightarrow?$$</span></p>\n" }, { "answer_id": 2112669, "score": 1, "text": "<p>Which version of Fermat's Little Theorem are you using? If you use</p>\n\n<blockquote>\n <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\\equiv a\\pmod p$ for every integer $a$</p>\n</blockquote>\n\n<p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p>\n\n<p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p>\n" }, { "answer_id": 2112703, "score": 0, "text": "<p>The smallest positive integer greater than $k$ such that $a^k \\equiv 1 \\bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href=\"https://en.wikipedia.org/wiki/Carmichael_function\" rel=\"nofollow noreferrer\">this wikipedia page</a>.</p>\n\n<p>In the case in which $n$ is square free, $\\lambda(n)+1$ is the smallest number such that $a^{\\lambda(n)}\\equiv a \\bmod n$ for all $a$. We have that $\\lambda(10001)+1=61$</p>\n" }, { "answer_id": 2112671, "score": 5, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$\\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class=\"math-container\">$\\rm\\:n = 1001 = \\color{#C00}7\\cdot\\color{#0A0}{11}\\cdot\\color{brown}{13},\\:$</span> is squarefree, it suffices to check <span class=\"math-container\">$\\rm\\:\\color{#C00}6,\\color{#0A0}{10},\\color{brown}{12}\\:|\\:61\\!-\\!1 = e\\!-\\!1.$</span></p>\n<p><strong>Theorem</strong> <span class=\"math-container\">$\\ $</span> For natural numbers <span class=\"math-container\">$\\rm\\:a,e,n\\:$</span> with <span class=\"math-container\">$\\rm\\:e,n&gt;1$</span></p>\n<p><span class=\"math-container\">$\\qquad\\rm n\\:|\\:a^{\\large e}-a\\:$</span> for all <span class=\"math-container\">$\\rm\\:a\\:\\iff n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: \\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ (\\Leftarrow)\\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class=\"math-container\">$\\rm\\:p\\:|\\:a^{\\large e}\\!-\\!a\\:$</span> for each prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n.\\:$</span> It's clear if <span class=\"math-container\">$\\rm\\,p \\mid a.\\,$</span> Else <span class=\"math-container\">$\\rm\\!\\bmod p\\!:\\ a \\not\\equiv 0\\,$</span> <span class=\"math-container\">$\\rm\\overset{\\rm Fermat}\\Longrightarrow\\, \\color{c00}{a^{\\large\\color{#0a0}{p-1}} \\equiv 1}$</span> so <span class=\"math-container\">$\\rm\\,\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}\\equiv 1}\\,$</span> by <span class=\"math-container\">$\\rm \\,\\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}\\,$</span> and <a href=\"https://math.stackexchange.com/a/2033681/242\">modular order reduction</a>. Thus <span class=\"math-container\">$\\rm\\,a^{\\large e}-a\\equiv a(\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}-1})\\equiv a(\\color{#c00}0)\\equiv 0$</span>. <span class=\"math-container\">$\\ \\ (\\Rightarrow)\\ \\ $</span> See <a href=\"https://math.stackexchange.com/a/164568/242\">here</a> (not used here).</p>\n<p><strong>Corollary</strong> <span class=\"math-container\">$\\rm\\,\\ n\\mid a^e b - a b^f\\ $</span> if <span class=\"math-container\">$\\,\\rm n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e\\!-\\!1,\\,f\\!-\\!1$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\rm\\bmod n\\!:\\,\\ a^e\\equiv a,\\, b^f\\equiv b\\,$</span> so <span class=\"math-container\">$\\rm\\,a^e b - ab^f\\equiv ab-ab\\equiv 0$</span></p>\n<p><strong>Corollary'</strong> <span class=\"math-container\">$\\ n\\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\\,$</span> if <span class=\"math-container\">$\\,n\\,$</span> is squarefree, and prime <span class=\"math-container\">$\\,p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e_i\\!-\\!1\\,$</span> and <span class=\"math-container\">$\\,f(x,y)\\in\\Bbb Z[x,y],\\,$</span> i.e. 
<span class=\"math-container\">$\\,f\\,$</span> is a polynomial in <span class=\"math-container\">$\\,x,y\\,$</span> with integer coefficients.</p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\bmod n\\!:\\,\\ a^{e_i}\\equiv a,\\, b^{e_i}\\equiv b\\,$</span> so <span class=\"math-container\">$f(a^{e_i},b^{e_i}) \\equiv f(a,b)\\,$</span> by the <a href=\"https://math.stackexchange.com/a/879262/242\">Polynomial Congruence Rule</a>.</p>\n" }, { "answer_id": 3540835, "score": 2, "text": "<p>HINT.-<span class=\"math-container\">$$\\begin{cases}a^{61}=(a^7)^8\\cdot a^5\\equiv a^8\\cdot a^5\\equiv a^7\\equiv a\\mod7\\\\a^{61}=(a^{11})^5\\cdot a^6\\equiv a^ {11}\\equiv a\\mod 11\\\\a^{61}=(a^{13})^4\\cdot a^9\\equiv a^{13}\\equiv a\\mod13\\end{cases}$$</span></p>\n\n<p>Thus <span class=\"math-container\">$$a^{61}-a=7m_1=11m_2=13m_3\\Rightarrow?$$</span></p>\n" }, { "answer_id": 2112669, "score": 1, "text": "<p>Which version of Fermat's Little Theorem are you using? If you use</p>\n\n<blockquote>\n <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\\equiv a\\pmod p$ for every integer $a$</p>\n</blockquote>\n\n<p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p>\n\n<p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p>\n" }, { "answer_id": 2112703, "score": 0, "text": "<p>The smallest positive integer greater than $k$ such that $a^k \\equiv 1 \\bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href=\"https://en.wikipedia.org/wiki/Carmichael_function\" rel=\"nofollow noreferrer\">this wikipedia page</a>.</p>\n\n<p>In the case in which $n$ is square free, $\\lambda(n)+1$ is the smallest number such that $a^{\\lambda(n)}\\equiv a \\bmod n$ for all $a$. We have that $\\lambda(10001)+1=61$</p>\n" }, { "answer_id": 2112671, "score": 5, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$\\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class=\"math-container\">$\\rm\\:n = 1001 = \\color{#C00}7\\cdot\\color{#0A0}{11}\\cdot\\color{brown}{13},\\:$</span> is squarefree, it suffices to check <span class=\"math-container\">$\\rm\\:\\color{#C00}6,\\color{#0A0}{10},\\color{brown}{12}\\:|\\:61\\!-\\!1 = e\\!-\\!1.$</span></p>\n<p><strong>Theorem</strong> <span class=\"math-container\">$\\ $</span> For natural numbers <span class=\"math-container\">$\\rm\\:a,e,n\\:$</span> with <span class=\"math-container\">$\\rm\\:e,n&gt;1$</span></p>\n<p><span class=\"math-container\">$\\qquad\\rm n\\:|\\:a^{\\large e}-a\\:$</span> for all <span class=\"math-container\">$\\rm\\:a\\:\\iff n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: \\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ (\\Leftarrow)\\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class=\"math-container\">$\\rm\\:p\\:|\\:a^{\\large e}\\!-\\!a\\:$</span> for each prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n.\\:$</span> It's clear if <span class=\"math-container\">$\\rm\\,p \\mid a.\\,$</span> Else <span class=\"math-container\">$\\rm\\!\\bmod p\\!:\\ a \\not\\equiv 0\\,$</span> <span class=\"math-container\">$\\rm\\overset{\\rm Fermat}\\Longrightarrow\\, \\color{c00}{a^{\\large\\color{#0a0}{p-1}} \\equiv 1}$</span> so <span class=\"math-container\">$\\rm\\,\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}\\equiv 1}\\,$</span> by <span class=\"math-container\">$\\rm \\,\\color{#0a0}{p\\!-\\!1\\mid e\\!-\\!1}\\,$</span> and <a href=\"https://math.stackexchange.com/a/2033681/242\">modular order reduction</a>. Thus <span class=\"math-container\">$\\rm\\,a^{\\large e}-a\\equiv a(\\color{#c00}{a^{\\large\\color{#0a0}{e-1}}-1})\\equiv a(\\color{#c00}0)\\equiv 0$</span>. <span class=\"math-container\">$\\ \\ (\\Rightarrow)\\ \\ $</span> See <a href=\"https://math.stackexchange.com/a/164568/242\">here</a> (not used here).</p>\n<p><strong>Corollary</strong> <span class=\"math-container\">$\\rm\\,\\ n\\mid a^e b - a b^f\\ $</span> if <span class=\"math-container\">$\\,\\rm n\\:$</span> is squarefree, and prime <span class=\"math-container\">$\\rm\\:p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e\\!-\\!1,\\,f\\!-\\!1$</span></p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\rm\\bmod n\\!:\\,\\ a^e\\equiv a,\\, b^f\\equiv b\\,$</span> so <span class=\"math-container\">$\\rm\\,a^e b - ab^f\\equiv ab-ab\\equiv 0$</span></p>\n<p><strong>Corollary'</strong> <span class=\"math-container\">$\\ n\\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\\,$</span> if <span class=\"math-container\">$\\,n\\,$</span> is squarefree, and prime <span class=\"math-container\">$\\,p\\:|\\:n\\:\\Rightarrow\\: p\\!-\\!1\\:|\\:e_i\\!-\\!1\\,$</span> and <span class=\"math-container\">$\\,f(x,y)\\in\\Bbb Z[x,y],\\,$</span> i.e. 
<span class=\"math-container\">$\\,f\\,$</span> is a polynomial in <span class=\"math-container\">$\\,x,y\\,$</span> with integer coefficients.</p>\n<p><strong>Proof</strong> <span class=\"math-container\">$\\ $</span> By the Theorem <span class=\"math-container\">$\\bmod n\\!:\\,\\ a^{e_i}\\equiv a,\\, b^{e_i}\\equiv b\\,$</span> so <span class=\"math-container\">$f(a^{e_i},b^{e_i}) \\equiv f(a,b)\\,$</span> by the <a href=\"https://math.stackexchange.com/a/879262/242\">Polynomial Congruence Rule</a>.</p>\n" }, { "answer_id": 3540835, "score": 2, "text": "<p>HINT.-<span class=\"math-container\">$$\\begin{cases}a^{61}=(a^7)^8\\cdot a^5\\equiv a^8\\cdot a^5\\equiv a^7\\equiv a\\mod7\\\\a^{61}=(a^{11})^5\\cdot a^6\\equiv a^ {11}\\equiv a\\mod 11\\\\a^{61}=(a^{13})^4\\cdot a^9\\equiv a^{13}\\equiv a\\mod13\\end{cases}$$</span></p>\n\n<p>Thus <span class=\"math-container\">$$a^{61}-a=7m_1=11m_2=13m_3\\Rightarrow?$$</span></p>\n" }, { "answer_id": 2112669, "score": 1, "text": "<p>Which version of Fermat's Little Theorem are you using? If you use</p>\n\n<blockquote>\n <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\\equiv a\\pmod p$ for every integer $a$</p>\n</blockquote>\n\n<p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p>\n\n<p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p>\n" }, { "answer_id": 2112703, "score": 0, "text": "<p>The smallest positive integer greater than $k$ such that $a^k \\equiv 1 \\bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href=\"https://en.wikipedia.org/wiki/Carmichael_function\" rel=\"nofollow noreferrer\">this wikipedia page</a>.</p>\n\n<p>In the case in which $n$ is square free, $\\lambda(n)+1$ is the smallest number such that $a^{\\lambda(n)}\\equiv a \\bmod n$ for all $a$. We have that $\\lambda(10001)+1=61$</p>\n" }, { "answer_id": 2112671, "score": 5, "text": "<p><strong>Hint</strong> <span class=\"math-container\">$\\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Question: Show that $a^{61} \equiv a\pmod{1001}$ for every $a \in \mathbb{N}$ Answer 1: <p><strong>Hint</strong> <span class="math-container">$\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13}\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{#c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e.
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 2: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^{11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 3: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$.</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 4: <p>The smallest positive integer $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the Carmichael lambda function $\lambda(n)$. It is easy to calculate, as explained on <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this Wikipedia page</a>.</p> <p>In the case in which $n$ is squarefree, $\lambda(n)+1$ is the smallest exponent $e&gt;1$ such that $a^e\equiv a \bmod n$ for all $a$. We have that $\lambda(1001)+1=61$.</p>
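To tie Answers 2 and 4 together, here is a second short Python sketch, again an editorial illustration rather than anything from the answers themselves (it assumes Python 3.9+ for `math.lcm`, and `carmichael_squarefree` is a hypothetical helper name): it computes $\lambda(1001) = \operatorname{lcm}(6,10,12) = 60$, so $61 = \lambda(1001)+1$ is the smallest exponent $e&gt;1$ that works, and it repeats the per-prime Fermat reductions from the hint.

```python
from math import lcm  # Python 3.9+

# For squarefree n = p1 * ... * pk the Carmichael function is
# lambda(n) = lcm(p1 - 1, ..., pk - 1); then a^(lambda(n)+1) ≡ a (mod n).
def carmichael_squarefree(primes):
    return lcm(*(p - 1 for p in primes))

primes = (7, 11, 13)                  # 1001 = 7 * 11 * 13
lam = carmichael_squarefree(primes)   # lcm(6, 10, 12) = 60
assert lam == 60 and lam + 1 == 61

# Per-prime reduction as in the HINT: Fermat gives a^p ≡ a (mod p),
# so a^61 ≡ a modulo 7, 11 and 13 separately, hence modulo their product.
for p in primes:
    assert all(pow(a, 61, p) == a for a in range(p))
```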
Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13},\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e. 
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 6: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^ {11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 7: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 8: <p>The smallest positive integer greater than $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this wikipedia page</a>.</p> <p>In the case in which $n$ is square free, $\lambda(n)+1$ is the smallest number such that $a^{\lambda(n)}\equiv a \bmod n$ for all $a$. We have that $\lambda(10001)+1=61$</p> Answer 9: <p><strong>Hint</strong> <span class="math-container">$\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13},\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e. 
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 10: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^ {11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 11: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 12: <p>The smallest positive integer greater than $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this wikipedia page</a>.</p> <p>In the case in which $n$ is square free, $\lambda(n)+1$ is the smallest number such that $a^{\lambda(n)}\equiv a \bmod n$ for all $a$. We have that $\lambda(10001)+1=61$</p> Answer 13: <p><strong>Hint</strong> <span class="math-container">$\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13},\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e. 
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 14: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^ {11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 15: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 16: <p>The smallest positive integer greater than $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this wikipedia page</a>.</p> <p>In the case in which $n$ is square free, $\lambda(n)+1$ is the smallest number such that $a^{\lambda(n)}\equiv a \bmod n$ for all $a$. We have that $\lambda(10001)+1=61$</p> Answer 17: <p><strong>Hint</strong> <span class="math-container">$\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13},\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e. 
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 18: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^ {11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 19: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 20: <p>The smallest positive integer greater than $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this wikipedia page</a>.</p> <p>In the case in which $n$ is square free, $\lambda(n)+1$ is the smallest number such that $a^{\lambda(n)}\equiv a \bmod n$ for all $a$. We have that $\lambda(10001)+1=61$</p> Answer 21: <p><strong>Hint</strong> <span class="math-container">$\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13},\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e. 
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 22: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^ {11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 23: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 24: <p>The smallest positive integer greater than $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this wikipedia page</a>.</p> <p>In the case in which $n$ is square free, $\lambda(n)+1$ is the smallest number such that $a^{\lambda(n)}\equiv a \bmod n$ for all $a$. We have that $\lambda(10001)+1=61$</p> Answer 25: <p><strong>Hint</strong> <span class="math-container">$\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13},\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e. 
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 26: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^ {11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 27: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 28: <p>The smallest positive integer greater than $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this wikipedia page</a>.</p> <p>In the case in which $n$ is square free, $\lambda(n)+1$ is the smallest number such that $a^{\lambda(n)}\equiv a \bmod n$ for all $a$. We have that $\lambda(10001)+1=61$</p> Answer 29: <p><strong>Hint</strong> <span class="math-container">$\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13},\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e. 
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 30: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^ {11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 31: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 32: <p>The smallest positive integer greater than $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this wikipedia page</a>.</p> <p>In the case in which $n$ is square free, $\lambda(n)+1$ is the smallest number such that $a^{\lambda(n)}\equiv a \bmod n$ for all $a$. We have that $\lambda(10001)+1=61$</p> Answer 33: <p><strong>Hint</strong> <span class="math-container">$\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13},\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e. 
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 34: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^ {11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 35: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 36: <p>The smallest positive integer greater than $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this wikipedia page</a>.</p> <p>In the case in which $n$ is square free, $\lambda(n)+1$ is the smallest number such that $a^{\lambda(n)}\equiv a \bmod n$ for all $a$. We have that $\lambda(10001)+1=61$</p> Answer 37: <p><strong>Hint</strong> <span class="math-container">$\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13},\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e. 
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 38: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^ {11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 39: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 40: <p>The smallest positive integer greater than $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this wikipedia page</a>.</p> <p>In the case in which $n$ is square free, $\lambda(n)+1$ is the smallest number such that $a^{\lambda(n)}\equiv a \bmod n$ for all $a$. We have that $\lambda(10001)+1=61$</p> Answer 41: <p><strong>Hint</strong> <span class="math-container">$\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13},\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e. 
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 42: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^ {11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 43: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 44: <p>The smallest positive integer greater than $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this wikipedia page</a>.</p> <p>In the case in which $n$ is square free, $\lambda(n)+1$ is the smallest number such that $a^{\lambda(n)}\equiv a \bmod n$ for all $a$. We have that $\lambda(10001)+1=61$</p> Answer 45: <p><strong>Hint</strong> <span class="math-container">$\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13},\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e. 
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 46: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^ {11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 47: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 48: <p>The smallest positive integer greater than $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this wikipedia page</a>.</p> <p>In the case in which $n$ is square free, $\lambda(n)+1$ is the smallest number such that $a^{\lambda(n)}\equiv a \bmod n$ for all $a$. We have that $\lambda(10001)+1=61$</p> Answer 49: <p><strong>Hint</strong> <span class="math-container">$\ $</span> Apply the following simple generalization of the little Fermat &amp; Euler phi theorem. 
Since <span class="math-container">$\rm\:n = 1001 = \color{#C00}7\cdot\color{#0A0}{11}\cdot\color{brown}{13},\:$</span> is squarefree, it suffices to check <span class="math-container">$\rm\:\color{#C00}6,\color{#0A0}{10},\color{brown}{12}\:|\:61\!-\!1 = e\!-\!1.$</span></p> <p><strong>Theorem</strong> <span class="math-container">$\ $</span> For natural numbers <span class="math-container">$\rm\:a,e,n\:$</span> with <span class="math-container">$\rm\:e,n&gt;1$</span></p> <p><span class="math-container">$\qquad\rm n\:|\:a^{\large e}-a\:$</span> for all <span class="math-container">$\rm\:a\:\iff n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: \color{#0a0}{p\!-\!1\mid e\!-\!1}$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ (\Leftarrow)\ $</span> By unique prime factorization (or Euclid's Lemma), a squarefree natural divides another iff all its prime factors do, so we need only show <span class="math-container">$\rm\:p\:|\:a^{\large e}\!-\!a\:$</span> for each prime <span class="math-container">$\rm\:p\:|\:n.\:$</span> It's clear if <span class="math-container">$\rm\,p \mid a.\,$</span> Else <span class="math-container">$\rm\!\bmod p\!:\ a \not\equiv 0\,$</span> <span class="math-container">$\rm\overset{\rm Fermat}\Longrightarrow\, \color{c00}{a^{\large\color{#0a0}{p-1}} \equiv 1}$</span> so <span class="math-container">$\rm\,\color{#c00}{a^{\large\color{#0a0}{e-1}}\equiv 1}\,$</span> by <span class="math-container">$\rm \,\color{#0a0}{p\!-\!1\mid e\!-\!1}\,$</span> and <a href="https://math.stackexchange.com/a/2033681/242">modular order reduction</a>. Thus <span class="math-container">$\rm\,a^{\large e}-a\equiv a(\color{#c00}{a^{\large\color{#0a0}{e-1}}-1})\equiv a(\color{#c00}0)\equiv 0$</span>. <span class="math-container">$\ \ (\Rightarrow)\ \ $</span> See <a href="https://math.stackexchange.com/a/164568/242">here</a> (not used here).</p> <p><strong>Corollary</strong> <span class="math-container">$\rm\,\ n\mid a^e b - a b^f\ $</span> if <span class="math-container">$\,\rm n\:$</span> is squarefree, and prime <span class="math-container">$\rm\:p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e\!-\!1,\,f\!-\!1$</span></p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\rm\bmod n\!:\,\ a^e\equiv a,\, b^f\equiv b\,$</span> so <span class="math-container">$\rm\,a^e b - ab^f\equiv ab-ab\equiv 0$</span></p> <p><strong>Corollary'</strong> <span class="math-container">$\ n\mid f(a^{e_1},b^{e_2}) - f(a^{e_3},b^{e_4})\,$</span> if <span class="math-container">$\,n\,$</span> is squarefree, and prime <span class="math-container">$\,p\:|\:n\:\Rightarrow\: p\!-\!1\:|\:e_i\!-\!1\,$</span> and <span class="math-container">$\,f(x,y)\in\Bbb Z[x,y],\,$</span> i.e. 
<span class="math-container">$\,f\,$</span> is a polynomial in <span class="math-container">$\,x,y\,$</span> with integer coefficients.</p> <p><strong>Proof</strong> <span class="math-container">$\ $</span> By the Theorem <span class="math-container">$\bmod n\!:\,\ a^{e_i}\equiv a,\, b^{e_i}\equiv b\,$</span> so <span class="math-container">$f(a^{e_i},b^{e_i}) \equiv f(a,b)\,$</span> by the <a href="https://math.stackexchange.com/a/879262/242">Polynomial Congruence Rule</a>.</p> Answer 50: <p>HINT.-<span class="math-container">$$\begin{cases}a^{61}=(a^7)^8\cdot a^5\equiv a^8\cdot a^5\equiv a^7\equiv a\mod7\\a^{61}=(a^{11})^5\cdot a^6\equiv a^ {11}\equiv a\mod 11\\a^{61}=(a^{13})^4\cdot a^9\equiv a^{13}\equiv a\mod13\end{cases}$$</span></p> <p>Thus <span class="math-container">$$a^{61}-a=7m_1=11m_2=13m_3\Rightarrow?$$</span></p> Answer 51: <p>Which version of Fermat's Little Theorem are you using? If you use</p> <blockquote> <p><strong>Theorem</strong>. Let $p$ be prime. Then $a^p\equiv a\pmod p$ for every integer $a$</p> </blockquote> <p>you should find the problem works out quite simply. (Use a factorisation of $1001$, as you suggested.)</p> <p>You can regard the above result as an alternative statement of Fermat's Little Theorem, or as a corollary of Fermat's Little Theorem.</p> Answer 52: <p>The smallest positive integer greater than $k$ such that $a^k \equiv 1 \bmod n$ for all $a$ coprime to $n$ is given by the charmicael lambda function. It is easy to caclulate it, as explained in <a href="https://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow noreferrer">this wikipedia page</a>.</p> <p>In the case in which $n$ is square free, $\lambda(n)+1$ is the smallest number such that $a^{\lambda(n)}\equiv a \bmod n$ for all $a$. We have that $\lambda(10001)+1=61$</p>
if : $abc=8 $ then : $(a+1)(b+1)(c+1)≥27$
2,113,791
7
https://math.stackexchange.com/questions/2113791/if-abc-8-then-a1b1c1%e2%89%a527
<p>If $$abc=8, \qquad a,b,c\in \mathbb{R}_{&gt; 0},$$</p> <p>then</p> <p>$$(a+1)(b+1)(c+1)\ge 27.$$</p> <p>My try:</p> <p>$$(a+1)(b+1)(c+1)=1+(a+b+c)+(ab+ac+bc)+abc$$</p> <p>$$(a+1)(b+1)(c+1)=1+(a+b+c)+(ab+ac+bc)+8,$$</p> <p>then?</p>
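As a side note (my addition, not part of the original post), the algebraic expansion in the attempt is easy to confirm symbolically; a minimal sketch, assuming sympy is available:

```python
# Symbolic check that (a+1)(b+1)(c+1) expands as claimed in the attempt,
# before the substitution abc = 8 is made.
from sympy import symbols, expand

a, b, c = symbols("a b c", positive=True)
lhs = expand((a + 1) * (b + 1) * (c + 1))
rhs = expand(1 + (a + b + c) + (a*b + a*c + b*c) + a*b*c)
assert lhs == rhs  # identical polynomials
```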
[ { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" }, { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. 
The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" }, { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). 
You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" }, { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" }, { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. 
Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" }, { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). 
You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" }, { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" }, { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. 
Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" }, { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). 
You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" }, { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" }, { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. 
Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" }, { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). 
You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" }, { "answer_id": 2113801, "score": 8, "text": "<p>Using $\\bf{A.M\\geq G.M}$</p>\n\n<p>$$\\frac{a}{2}+\\frac{a}{2}+1\\geq 3\\left(\\frac{a^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>Similarly $$\\frac{b}{2}+\\frac{b}{2}+1\\geq 3\\left(\\frac{b^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>$$\\frac{c}{2}+\\frac{c}{2}+1\\geq 3\\left(\\frac{c^2}{4}\\right)^{\\frac{1}{3}}$$</p>\n\n<p>So $$(a+1)(b+1)(c+1)\\geq 27 \\left(\\frac{(abc)^2}{64}\\right)^{\\frac{1}{3}} = 27$$</p>\n\n<p>equality hold when $\\displaystyle a= b=c = 2$</p>\n" }, { "answer_id": 2113805, "score": 8, "text": "<p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$\nsince $$abc=8.$$ We get $$\\frac{a+b+c}{3}\\geq \\sqrt[3]{abc}=2$$ and $$\\frac{ab+bc+ca}{3}\\geq \\sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p>\n" }, { "answer_id": 2113820, "score": 4, "text": "<p>Here is a really hands-on way to proceed.</p>\n\n<p>Suppose $b\\gt c$, replace both with $B, C=\\sqrt {bc}$ which keeps the product the same, but makes the two equal. Now note that $$0\\lt (\\sqrt b-\\sqrt c)^2=b+c-2\\sqrt {bc}$$ or $$B+C=2\\sqrt{bc}\\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\\lt b+c$ and $a(B+C)\\lt a(b+c)$ with the other terms unchanged.</p>\n\n<p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p>\n\n<p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). You should reflect on how the various proofs use this fact.</p>\n" }, { "answer_id": 2113858, "score": 2, "text": "<p>By Holder $$\\prod\\limits_{cyc}(a+1)\\geq\\left(\\sqrt[3]{abc}+1\\right)^3=27$$</p>\n" } ]
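For readers who want an empirical cross-check of the four proofs above, here is a small random-sampling sketch (my addition, illustrative only): it samples positive triples on the constraint surface $abc=8$ and confirms the product never drops below $27$, with the minimum at $a=b=c=2$.

```python
# Random-sampling sanity check of the inequality, not a proof:
# over positive triples with a*b*c = 8, (a+1)(b+1)(c+1) should stay >= 27.
import random

best = float("inf")
for _ in range(200_000):
    a = random.uniform(0.01, 50.0)
    b = random.uniform(0.01, 50.0)
    c = 8.0 / (a * b)               # enforce the constraint abc = 8
    best = min(best, (a + 1) * (b + 1) * (c + 1))

print(best)            # empirically never below 27, approached near a=b=c=2
print((2 + 1) ** 3)    # 27, the equality case
```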
Question: if : $abc=8 $ then : $(a+1)(b+1)(c+1)≥27$ Answer 1: <p>Using $\bf{A.M\geq G.M}$</p> <p>$$\frac{a}{2}+\frac{a}{2}+1\geq 3\left(\frac{a^2}{4}\right)^{\frac{1}{3}}$$</p> <p>Similarly $$\frac{b}{2}+\frac{b}{2}+1\geq 3\left(\frac{b^2}{4}\right)^{\frac{1}{3}}$$</p> <p>$$\frac{c}{2}+\frac{c}{2}+1\geq 3\left(\frac{c^2}{4}\right)^{\frac{1}{3}}$$</p> <p>So $$(a+1)(b+1)(c+1)\geq 27 \left(\frac{(abc)^2}{64}\right)^{\frac{1}{3}} = 27$$</p> <p>equality hold when $\displaystyle a= b=c = 2$</p> Answer 2: <p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$ since $$abc=8.$$ We get $$\frac{a+b+c}{3}\geq \sqrt[3]{abc}=2$$ and $$\frac{ab+bc+ca}{3}\geq \sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p> Answer 3: <p>Here is a really hands-on way to proceed.</p> <p>Suppose $b\gt c$, replace both with $B, C=\sqrt {bc}$ which keeps the product the same, but makes the two equal. Now note that $$0\lt (\sqrt b-\sqrt c)^2=b+c-2\sqrt {bc}$$ or $$B+C=2\sqrt{bc}\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\lt b+c$ and $a(B+C)\lt a(b+c)$ with the other terms unchanged.</p> <p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p> <p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). You should reflect on how the various proofs use this fact.</p> Answer 4: <p>By Holder $$\prod\limits_{cyc}(a+1)\geq\left(\sqrt[3]{abc}+1\right)^3=27$$</p> Answer 5: <p>Using $\bf{A.M\geq G.M}$</p> <p>$$\frac{a}{2}+\frac{a}{2}+1\geq 3\left(\frac{a^2}{4}\right)^{\frac{1}{3}}$$</p> <p>Similarly $$\frac{b}{2}+\frac{b}{2}+1\geq 3\left(\frac{b^2}{4}\right)^{\frac{1}{3}}$$</p> <p>$$\frac{c}{2}+\frac{c}{2}+1\geq 3\left(\frac{c^2}{4}\right)^{\frac{1}{3}}$$</p> <p>So $$(a+1)(b+1)(c+1)\geq 27 \left(\frac{(abc)^2}{64}\right)^{\frac{1}{3}} = 27$$</p> <p>equality hold when $\displaystyle a= b=c = 2$</p> Answer 6: <p>We have $$(a+1)(b+1)(c+1)=9+a+b+c+ab+bc+ca,$$ since $$abc=8.$$ We get $$\frac{a+b+c}{3}\geq \sqrt[3]{abc}=2$$ and $$\frac{ab+bc+ca}{3}\geq \sqrt[3]{(abc)^2}=4$$ Multplying by $3$ and adding up, we get the statement above.</p> Answer 7: <p>Here is a really hands-on way to proceed.</p> <p>Suppose $b\gt c$, replace both with $B, C=\sqrt {bc}$ which keeps the product the same, but makes the two equal. Now note that $$0\lt (\sqrt b-\sqrt c)^2=b+c-2\sqrt {bc}$$ or $$B+C=2\sqrt{bc}\lt b+c$$ so the sum is reduced, and looking at the components of $(a+1)(b+1)(c+1)$ we have $B+C\lt b+c$ and $a(B+C)\lt a(b+c)$ with the other terms unchanged.</p> <p>The replacement therefore reduces the product, which can be reduced, therefore, unless all the terms are equal. The minimum value therefore has $a=b=c$.</p> <p>Note this includes a hidden proof of the AM/GM inequality for two variables - the AM/GM inequality is the essential idea here (or this can be seen as a property of curvature - the sum being flat and the product being curved). Any proof will also use somewhere the fact that the numbers are stated to be positive (else set one of them equal to $-1$ ...). 
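A quick numerical sanity check of the inequality above (an added sketch, not part of the original thread; the function name and sampling ranges are illustrative choices): sample random positive pairs $(a,b)$, force $c=8/(ab)$ so that $abc=8$, and confirm $(a+1)(b+1)(c+1)\geq 27$, with the minimum approached at $a=b=c=2$.

<pre><code>import random

def check_inequality(trials=100_000):
    # Sample a, b > 0 and set c = 8/(a*b) so that abc = 8 exactly
    best = float("inf")
    for _ in range(trials):
        a = random.uniform(0.01, 20.0)
        b = random.uniform(0.01, 20.0)
        c = 8.0 / (a * b)
        val = (a + 1) * (b + 1) * (c + 1)
        assert val >= 27 - 1e-9  # the claimed lower bound, with float slack
        best = min(best, val)
    return best

print(check_inequality())  # slightly above 27; the equality case is a = b = c = 2
</code></pre>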
Roll an N sided die K times. Let S be the side that appeared most often. What is the expected number of times S appeared?
2,114,174
7
https://math.stackexchange.com/questions/2114174/roll-an-n-sided-die-k-times-let-s-be-the-side-that-appeared-most-often-what-is
<p>For example, consider a 6-sided die rolled 10 times. Based on the following Monte Carlo simulation, I get that the side that appears most often will appear 3.44 times on average.</p> <pre><code>from random import randint

n = 6
k = 10
samples = 10000
results = []
for _ in range(samples):
    counts = {s: 0 for s in range(n)}
    for _ in range(k):
        s = randint(0, n - 1)
        counts[s] += 1
    results.append(max(counts.values()))

print(sum(results) / len(results))
</code></pre> <p>But I can't figure out how to get this in a closed form for any particular N and K.</p>
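For very small $N$ and $K$ the Monte Carlo figure can be cross-checked exactly by enumerating all $N^K$ equally likely outcomes. The following is an added sketch, not part of the original question; `exact_expectation` is a hypothetical helper name.

<pre><code>from itertools import product
from collections import Counter

def exact_expectation(n, k):
    # Enumerate all n**k equally likely roll sequences (feasible only for small n, k)
    total = sum(max(Counter(rolls).values())
                for rolls in product(range(n), repeat=k))
    return total / n ** k

print(exact_expectation(3, 5))  # exact value to compare with Monte Carlo estimates
</code></pre>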
[ { "answer_id": 2114256, "score": 2, "text": "<p>Not an answer, but it's worth trying some example with small $N$.</p>\n\n<p>Let $M$ be your number.</p>\n\n<p>For $n=2$, and any $k/2 &lt; m \\leq k$ you have $P(M=m)=\\frac{\\binom{k}{m}}{2^{k-1}}$. If $k$ even, then $P(M=k/2)=\\frac{\\binom{k}{k/2}}{2^k}$.</p>\n\n<p>So, for $k=2j+1$ odd, you have:</p>\n\n<p>$$\\begin{align}E(M)&amp;=\\frac{1}{2^{2j}}\\sum_{m=j+1}^{2j+1} m\\binom{2j+1}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j+1}^{2j+1}\\binom{2j}{m-1}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j}^{2j}\\binom{2j}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\left(\\frac{1}{2}2^{2j}+\\frac{1}{2}\\binom{2j}{j}\\right)\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k-1}{(k-1)/2}}{2^{k-1}}\\right)\n\\end{align}$$</p>\n\n<p>For $N=2$ and $k=2j$, you get: $$\n\\begin{align}E(M)&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{1}{2^{2j-1}}\\sum_{m=j+1}^{2j}m\\binom{2j}{m}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{2j}{2^{2j-1}}\\sum_{m=j+1}^{2j}\\binom{2j-1}{m-1}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{j}{2^{2j-2}}\\cdot\\frac{2^{2j-1}}{2}\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k}{k/2}}{2^k}\\right)\n\\end{align}$$</p>\n\n<p>It's gonna get messier when trying it for $N=3$.</p>\n" }, { "answer_id": 2116031, "score": 2, "text": "<p>Here is a closed form, we will need more sophisticated methods for the\nasymptotics.<P></p>\n\n<p>Following the notation introduced at this <a href=\"https://math.stackexchange.com/questions/2112055/\">MSE\nlink</a> we suppose\nthat the die has $m$ faces and is rolled $n$ times. Rolling the die\nwith the most occured value being $q$ and instances of this size being\nmarked yields the species</p>\n\n<p>$$\\mathfrak{S}_{=m}\n(\\mathfrak{P}_{=0}(\\mathcal{Z})\n+ \\mathfrak{P}_{=1}(\\mathcal{Z})\n+ \\cdots\n+ \\mathcal{V}\\mathfrak{P}_{=q}(\\mathcal{Z})).$$</p>\n\n<p>This has generating function</p>\n\n<p>$$G(z,v) =\n\\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!} + v\\frac{z^q}{q!}\\right)^m.$$</p>\n\n<p>Subtracting the values where sets of size $q$ did not occur we obtain\nthe generating function</p>\n\n<p>$$H_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m\n- \\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!}\\right)^m.$$</p>\n\n<p>This also follows more or less by inspection.<P></p>\n\n<p>We then obtain for the desired quantity the closed form</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\frac{n!}{m^n} \n[z^n] \\sum_{q=1}^n q H_q(z).}$$</p>\n\n<p>Introducing </p>\n\n<p>$$L_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m$$</p>\n\n<p>we thus have</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \\sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p>\n\n<p>This is</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \n\\left(n L_n(z) - \\sum_{q=0}^{n-1} L_q(z)\\right).$$</p>\n\n<p>We also have for </p>\n\n<p>$$[z^n] L_q(z) =\n\\sum_{k=0}^{\\min(q, n)} \\frac{1}{k!} [z^{n-k}] \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^{m-1}$$</p>\n\n<p>Furthermore we obtain for $m=1$ </p>\n\n<p>$$[z^n] L_q(z) =\n[[n \\le q]] \\times \\frac{1}{n!}.$$</p>\n\n<p>With these we can implement a recursion, which in fact on being coded\nproved inferior to Maple's fast polynomial multiplication routines. 
It\nis included here because it memoizes coefficients of $L_q(z)$, thereby\nproviding a dramatic speed-up of the plots at the cost of allocating\nmore memory.<P></p>\n\n<p>All of this yields the following graph where we have scaled the plot\nby a factor of $n/m.$ This is it for a six-sided die:</p>\n\n<pre>\n 3+ H \n + \n | H \n + H \n | \n + H \n + HH \n | HH \n 2+ HH \n | HHH \n + HHHH \n + HHHHHHH \n | HHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>And here is the plot for a twelve-sided die. (I consider it worth\nobserving that we have the <strong>exact</strong> value for the expectation in the\ncase of $120$ rolls of this die, a case count that has $130$ digits,\nsimilar to what appeared in the companion post.)</p>\n\n<pre>\n 8+ \n | \n + \n + \n | \n + \n + H \n | \n 6+ \n | \n + \n + \n | H \n + \n + H \n | \n 4+ HH \n + H \n | H \n + HH \n + HH \n | HHHH \n + HHHHHH \n 2+ HHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>This was the Maple code.</p>\n\n<pre>\nwith(plots);\nwith(combinat);\n\nENUM :=\nproc(n, m)\noption remember;\nlocal rolls, res, ind, counts, least, most;\n res := 0;\n\n for ind from m^n to 2*m^n-1 do\n rolls := convert(ind, base, m);\n\n counts := map(mel-&gt;op(2, mel),\n convert(rolls[1..n], &#96;multiset&#96;));\n\n res := res + max(counts);\n od;\n\n res/m^n;\nend;\n\nL := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m;\n\nX :=\nproc(n, m)\n option remember;\n local H;\n\n H := q -&gt; expand(L(m,q)-L(m,q-1));\n\n n!/m^n*\n coeff(add(q*H(q), q=1..n), z, n);\nend;\n\nLCF :=\nproc(n,m,q)\n option remember;\n\n if n &lt; 0 then return 0 fi;\n\n if m = 1 then\n if n &lt;= q then return 1/n! fi;\n return 0;\n fi;\n\n add(1/k!*LCF(n-k,m-1,q),k=0..min(q,n));\nend;\n\nLVERIF :=\n(m, q) -&gt; add(LCF(n, m, q)*z^n, n=0..q*m);\n\nXX :=\nproc(n, m)\noption remember;\nlocal res;\n\n res :=\n n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1);\n\n res*n!/m^n;\nend;\n\n\nDICEPLOT :=\nproc(nmx, m)\nlocal pts;\n\n pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)];\n pointplot(pts);\nend;\n</pre>\n" }, { "answer_id": 2114256, "score": 2, "text": "<p>Not an answer, but it's worth trying some example with small $N$.</p>\n\n<p>Let $M$ be your number.</p>\n\n<p>For $n=2$, and any $k/2 &lt; m \\leq k$ you have $P(M=m)=\\frac{\\binom{k}{m}}{2^{k-1}}$. 
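The boxed closed form is straightforward to evaluate exactly with rational arithmetic. Below is an added Python sketch (a translation of the idea, not the author's original Maple; it assumes only the standard library) that truncates each $L_q(z)$ at degree $n$ before reading off $[z^n]$, and also cross-checks the even-$k$, $N=2$ formula from the first answer.

<pre><code>from fractions import Fraction
from math import comb, factorial

def expected_max(n, m):
    # Exact E[count of the most frequent face] for m faces and n rolls, via
    # E = (n!/m^n) [z^n] ( n*L_n(z) - sum_{q=0}^{n-1} L_q(z) ).
    def coeff_Lq(q):
        # [z^n] of L_q(z) = (sum_{r=0}^{q} z^r/r!)^m, truncated at degree n
        base = [Fraction(1, factorial(r)) for r in range(min(q, n) + 1)]
        poly = [Fraction(1)] + [Fraction(0)] * n
        for _ in range(m):
            new = [Fraction(0)] * (n + 1)
            for i, a in enumerate(poly):
                if a:
                    for j, b in enumerate(base):
                        if i + j <= n:
                            new[i + j] += a * b
            poly = new
        return poly[n]

    total = n * coeff_Lq(n) - sum(coeff_Lq(q) for q in range(n))
    return total * factorial(n) / Fraction(m ** n)

print(float(expected_max(10, 6)))  # close to the 3.44 Monte Carlo estimate above

# Cross-check against the N = 2, even-k closed form from the first answer
k = 8
assert expected_max(k, 2) == Fraction(k, 2) * (1 + Fraction(comb(k, k // 2), 2 ** k))
</code></pre>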
If $k$ even, then $P(M=k/2)=\\frac{\\binom{k}{k/2}}{2^k}$.</p>\n\n<p>So, for $k=2j+1$ odd, you have:</p>\n\n<p>$$\\begin{align}E(M)&amp;=\\frac{1}{2^{2j}}\\sum_{m=j+1}^{2j+1} m\\binom{2j+1}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j+1}^{2j+1}\\binom{2j}{m-1}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j}^{2j}\\binom{2j}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\left(\\frac{1}{2}2^{2j}+\\frac{1}{2}\\binom{2j}{j}\\right)\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k-1}{(k-1)/2}}{2^{k-1}}\\right)\n\\end{align}$$</p>\n\n<p>For $N=2$ and $k=2j$, you get: $$\n\\begin{align}E(M)&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{1}{2^{2j-1}}\\sum_{m=j+1}^{2j}m\\binom{2j}{m}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{2j}{2^{2j-1}}\\sum_{m=j+1}^{2j}\\binom{2j-1}{m-1}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{j}{2^{2j-2}}\\cdot\\frac{2^{2j-1}}{2}\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k}{k/2}}{2^k}\\right)\n\\end{align}$$</p>\n\n<p>It's gonna get messier when trying it for $N=3$.</p>\n" }, { "answer_id": 2116031, "score": 2, "text": "<p>Here is a closed form, we will need more sophisticated methods for the\nasymptotics.<P></p>\n\n<p>Following the notation introduced at this <a href=\"https://math.stackexchange.com/questions/2112055/\">MSE\nlink</a> we suppose\nthat the die has $m$ faces and is rolled $n$ times. Rolling the die\nwith the most occured value being $q$ and instances of this size being\nmarked yields the species</p>\n\n<p>$$\\mathfrak{S}_{=m}\n(\\mathfrak{P}_{=0}(\\mathcal{Z})\n+ \\mathfrak{P}_{=1}(\\mathcal{Z})\n+ \\cdots\n+ \\mathcal{V}\\mathfrak{P}_{=q}(\\mathcal{Z})).$$</p>\n\n<p>This has generating function</p>\n\n<p>$$G(z,v) =\n\\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!} + v\\frac{z^q}{q!}\\right)^m.$$</p>\n\n<p>Subtracting the values where sets of size $q$ did not occur we obtain\nthe generating function</p>\n\n<p>$$H_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m\n- \\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!}\\right)^m.$$</p>\n\n<p>This also follows more or less by inspection.<P></p>\n\n<p>We then obtain for the desired quantity the closed form</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\frac{n!}{m^n} \n[z^n] \\sum_{q=1}^n q H_q(z).}$$</p>\n\n<p>Introducing </p>\n\n<p>$$L_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m$$</p>\n\n<p>we thus have</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \\sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p>\n\n<p>This is</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \n\\left(n L_n(z) - \\sum_{q=0}^{n-1} L_q(z)\\right).$$</p>\n\n<p>We also have for </p>\n\n<p>$$[z^n] L_q(z) =\n\\sum_{k=0}^{\\min(q, n)} \\frac{1}{k!} [z^{n-k}] \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^{m-1}$$</p>\n\n<p>Furthermore we obtain for $m=1$ </p>\n\n<p>$$[z^n] L_q(z) =\n[[n \\le q]] \\times \\frac{1}{n!}.$$</p>\n\n<p>With these we can implement a recursion, which in fact on being coded\nproved inferior to Maple's fast polynomial multiplication routines. 
It\nis included here because it memoizes coefficients of $L_q(z)$, thereby\nproviding a dramatic speed-up of the plots at the cost of allocating\nmore memory.<P></p>\n\n<p>All of this yields the following graph where we have scaled the plot\nby a factor of $n/m.$ This is it for a six-sided die:</p>\n\n<pre>\n 3+ H \n + \n | H \n + H \n | \n + H \n + HH \n | HH \n 2+ HH \n | HHH \n + HHHH \n + HHHHHHH \n | HHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>And here is the plot for a twelve-sided die. (I consider it worth\nobserving that we have the <strong>exact</strong> value for the expectation in the\ncase of $120$ rolls of this die, a case count that has $130$ digits,\nsimilar to what appeared in the companion post.)</p>\n\n<pre>\n 8+ \n | \n + \n + \n | \n + \n + H \n | \n 6+ \n | \n + \n + \n | H \n + \n + H \n | \n 4+ HH \n + H \n | H \n + HH \n + HH \n | HHHH \n + HHHHHH \n 2+ HHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>This was the Maple code.</p>\n\n<pre>\nwith(plots);\nwith(combinat);\n\nENUM :=\nproc(n, m)\noption remember;\nlocal rolls, res, ind, counts, least, most;\n res := 0;\n\n for ind from m^n to 2*m^n-1 do\n rolls := convert(ind, base, m);\n\n counts := map(mel-&gt;op(2, mel),\n convert(rolls[1..n], &#96;multiset&#96;));\n\n res := res + max(counts);\n od;\n\n res/m^n;\nend;\n\nL := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m;\n\nX :=\nproc(n, m)\n option remember;\n local H;\n\n H := q -&gt; expand(L(m,q)-L(m,q-1));\n\n n!/m^n*\n coeff(add(q*H(q), q=1..n), z, n);\nend;\n\nLCF :=\nproc(n,m,q)\n option remember;\n\n if n &lt; 0 then return 0 fi;\n\n if m = 1 then\n if n &lt;= q then return 1/n! fi;\n return 0;\n fi;\n\n add(1/k!*LCF(n-k,m-1,q),k=0..min(q,n));\nend;\n\nLVERIF :=\n(m, q) -&gt; add(LCF(n, m, q)*z^n, n=0..q*m);\n\nXX :=\nproc(n, m)\noption remember;\nlocal res;\n\n res :=\n n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1);\n\n res*n!/m^n;\nend;\n\n\nDICEPLOT :=\nproc(nmx, m)\nlocal pts;\n\n pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)];\n pointplot(pts);\nend;\n</pre>\n" }, { "answer_id": 2114256, "score": 2, "text": "<p>Not an answer, but it's worth trying some example with small $N$.</p>\n\n<p>Let $M$ be your number.</p>\n\n<p>For $n=2$, and any $k/2 &lt; m \\leq k$ you have $P(M=m)=\\frac{\\binom{k}{m}}{2^{k-1}}$. 
If $k$ even, then $P(M=k/2)=\\frac{\\binom{k}{k/2}}{2^k}$.</p>\n\n<p>So, for $k=2j+1$ odd, you have:</p>\n\n<p>$$\\begin{align}E(M)&amp;=\\frac{1}{2^{2j}}\\sum_{m=j+1}^{2j+1} m\\binom{2j+1}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j+1}^{2j+1}\\binom{2j}{m-1}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j}^{2j}\\binom{2j}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\left(\\frac{1}{2}2^{2j}+\\frac{1}{2}\\binom{2j}{j}\\right)\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k-1}{(k-1)/2}}{2^{k-1}}\\right)\n\\end{align}$$</p>\n\n<p>For $N=2$ and $k=2j$, you get: $$\n\\begin{align}E(M)&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{1}{2^{2j-1}}\\sum_{m=j+1}^{2j}m\\binom{2j}{m}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{2j}{2^{2j-1}}\\sum_{m=j+1}^{2j}\\binom{2j-1}{m-1}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{j}{2^{2j-2}}\\cdot\\frac{2^{2j-1}}{2}\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k}{k/2}}{2^k}\\right)\n\\end{align}$$</p>\n\n<p>It's gonna get messier when trying it for $N=3$.</p>\n" }, { "answer_id": 2116031, "score": 2, "text": "<p>Here is a closed form, we will need more sophisticated methods for the\nasymptotics.<P></p>\n\n<p>Following the notation introduced at this <a href=\"https://math.stackexchange.com/questions/2112055/\">MSE\nlink</a> we suppose\nthat the die has $m$ faces and is rolled $n$ times. Rolling the die\nwith the most occured value being $q$ and instances of this size being\nmarked yields the species</p>\n\n<p>$$\\mathfrak{S}_{=m}\n(\\mathfrak{P}_{=0}(\\mathcal{Z})\n+ \\mathfrak{P}_{=1}(\\mathcal{Z})\n+ \\cdots\n+ \\mathcal{V}\\mathfrak{P}_{=q}(\\mathcal{Z})).$$</p>\n\n<p>This has generating function</p>\n\n<p>$$G(z,v) =\n\\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!} + v\\frac{z^q}{q!}\\right)^m.$$</p>\n\n<p>Subtracting the values where sets of size $q$ did not occur we obtain\nthe generating function</p>\n\n<p>$$H_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m\n- \\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!}\\right)^m.$$</p>\n\n<p>This also follows more or less by inspection.<P></p>\n\n<p>We then obtain for the desired quantity the closed form</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\frac{n!}{m^n} \n[z^n] \\sum_{q=1}^n q H_q(z).}$$</p>\n\n<p>Introducing </p>\n\n<p>$$L_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m$$</p>\n\n<p>we thus have</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \\sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p>\n\n<p>This is</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \n\\left(n L_n(z) - \\sum_{q=0}^{n-1} L_q(z)\\right).$$</p>\n\n<p>We also have for </p>\n\n<p>$$[z^n] L_q(z) =\n\\sum_{k=0}^{\\min(q, n)} \\frac{1}{k!} [z^{n-k}] \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^{m-1}$$</p>\n\n<p>Furthermore we obtain for $m=1$ </p>\n\n<p>$$[z^n] L_q(z) =\n[[n \\le q]] \\times \\frac{1}{n!}.$$</p>\n\n<p>With these we can implement a recursion, which in fact on being coded\nproved inferior to Maple's fast polynomial multiplication routines. 
It\nis included here because it memoizes coefficients of $L_q(z)$, thereby\nproviding a dramatic speed-up of the plots at the cost of allocating\nmore memory.<P></p>\n\n<p>All of this yields the following graph where we have scaled the plot\nby a factor of $n/m.$ This is it for a six-sided die:</p>\n\n<pre>\n 3+ H \n + \n | H \n + H \n | \n + H \n + HH \n | HH \n 2+ HH \n | HHH \n + HHHH \n + HHHHHHH \n | HHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>And here is the plot for a twelve-sided die. (I consider it worth\nobserving that we have the <strong>exact</strong> value for the expectation in the\ncase of $120$ rolls of this die, a case count that has $130$ digits,\nsimilar to what appeared in the companion post.)</p>\n\n<pre>\n 8+ \n | \n + \n + \n | \n + \n + H \n | \n 6+ \n | \n + \n + \n | H \n + \n + H \n | \n 4+ HH \n + H \n | H \n + HH \n + HH \n | HHHH \n + HHHHHH \n 2+ HHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>This was the Maple code.</p>\n\n<pre>\nwith(plots);\nwith(combinat);\n\nENUM :=\nproc(n, m)\noption remember;\nlocal rolls, res, ind, counts, least, most;\n res := 0;\n\n for ind from m^n to 2*m^n-1 do\n rolls := convert(ind, base, m);\n\n counts := map(mel-&gt;op(2, mel),\n convert(rolls[1..n], &#96;multiset&#96;));\n\n res := res + max(counts);\n od;\n\n res/m^n;\nend;\n\nL := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m;\n\nX :=\nproc(n, m)\n option remember;\n local H;\n\n H := q -&gt; expand(L(m,q)-L(m,q-1));\n\n n!/m^n*\n coeff(add(q*H(q), q=1..n), z, n);\nend;\n\nLCF :=\nproc(n,m,q)\n option remember;\n\n if n &lt; 0 then return 0 fi;\n\n if m = 1 then\n if n &lt;= q then return 1/n! fi;\n return 0;\n fi;\n\n add(1/k!*LCF(n-k,m-1,q),k=0..min(q,n));\nend;\n\nLVERIF :=\n(m, q) -&gt; add(LCF(n, m, q)*z^n, n=0..q*m);\n\nXX :=\nproc(n, m)\noption remember;\nlocal res;\n\n res :=\n n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1);\n\n res*n!/m^n;\nend;\n\n\nDICEPLOT :=\nproc(nmx, m)\nlocal pts;\n\n pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)];\n pointplot(pts);\nend;\n</pre>\n" }, { "answer_id": 2114256, "score": 2, "text": "<p>Not an answer, but it's worth trying some example with small $N$.</p>\n\n<p>Let $M$ be your number.</p>\n\n<p>For $n=2$, and any $k/2 &lt; m \\leq k$ you have $P(M=m)=\\frac{\\binom{k}{m}}{2^{k-1}}$. 
If $k$ even, then $P(M=k/2)=\\frac{\\binom{k}{k/2}}{2^k}$.</p>\n\n<p>So, for $k=2j+1$ odd, you have:</p>\n\n<p>$$\\begin{align}E(M)&amp;=\\frac{1}{2^{2j}}\\sum_{m=j+1}^{2j+1} m\\binom{2j+1}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j+1}^{2j+1}\\binom{2j}{m-1}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j}^{2j}\\binom{2j}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\left(\\frac{1}{2}2^{2j}+\\frac{1}{2}\\binom{2j}{j}\\right)\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k-1}{(k-1)/2}}{2^{k-1}}\\right)\n\\end{align}$$</p>\n\n<p>For $N=2$ and $k=2j$, you get: $$\n\\begin{align}E(M)&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{1}{2^{2j-1}}\\sum_{m=j+1}^{2j}m\\binom{2j}{m}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{2j}{2^{2j-1}}\\sum_{m=j+1}^{2j}\\binom{2j-1}{m-1}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{j}{2^{2j-2}}\\cdot\\frac{2^{2j-1}}{2}\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k}{k/2}}{2^k}\\right)\n\\end{align}$$</p>\n\n<p>It's gonna get messier when trying it for $N=3$.</p>\n" }, { "answer_id": 2116031, "score": 2, "text": "<p>Here is a closed form, we will need more sophisticated methods for the\nasymptotics.<P></p>\n\n<p>Following the notation introduced at this <a href=\"https://math.stackexchange.com/questions/2112055/\">MSE\nlink</a> we suppose\nthat the die has $m$ faces and is rolled $n$ times. Rolling the die\nwith the most occured value being $q$ and instances of this size being\nmarked yields the species</p>\n\n<p>$$\\mathfrak{S}_{=m}\n(\\mathfrak{P}_{=0}(\\mathcal{Z})\n+ \\mathfrak{P}_{=1}(\\mathcal{Z})\n+ \\cdots\n+ \\mathcal{V}\\mathfrak{P}_{=q}(\\mathcal{Z})).$$</p>\n\n<p>This has generating function</p>\n\n<p>$$G(z,v) =\n\\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!} + v\\frac{z^q}{q!}\\right)^m.$$</p>\n\n<p>Subtracting the values where sets of size $q$ did not occur we obtain\nthe generating function</p>\n\n<p>$$H_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m\n- \\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!}\\right)^m.$$</p>\n\n<p>This also follows more or less by inspection.<P></p>\n\n<p>We then obtain for the desired quantity the closed form</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\frac{n!}{m^n} \n[z^n] \\sum_{q=1}^n q H_q(z).}$$</p>\n\n<p>Introducing </p>\n\n<p>$$L_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m$$</p>\n\n<p>we thus have</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \\sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p>\n\n<p>This is</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \n\\left(n L_n(z) - \\sum_{q=0}^{n-1} L_q(z)\\right).$$</p>\n\n<p>We also have for </p>\n\n<p>$$[z^n] L_q(z) =\n\\sum_{k=0}^{\\min(q, n)} \\frac{1}{k!} [z^{n-k}] \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^{m-1}$$</p>\n\n<p>Furthermore we obtain for $m=1$ </p>\n\n<p>$$[z^n] L_q(z) =\n[[n \\le q]] \\times \\frac{1}{n!}.$$</p>\n\n<p>With these we can implement a recursion, which in fact on being coded\nproved inferior to Maple's fast polynomial multiplication routines. 
It\nis included here because it memoizes coefficients of $L_q(z)$, thereby\nproviding a dramatic speed-up of the plots at the cost of allocating\nmore memory.<P></p>\n\n<p>All of this yields the following graph where we have scaled the plot\nby a factor of $n/m.$ This is it for a six-sided die:</p>\n\n<pre>\n 3+ H \n + \n | H \n + H \n | \n + H \n + HH \n | HH \n 2+ HH \n | HHH \n + HHHH \n + HHHHHHH \n | HHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>And here is the plot for a twelve-sided die. (I consider it worth\nobserving that we have the <strong>exact</strong> value for the expectation in the\ncase of $120$ rolls of this die, a case count that has $130$ digits,\nsimilar to what appeared in the companion post.)</p>\n\n<pre>\n 8+ \n | \n + \n + \n | \n + \n + H \n | \n 6+ \n | \n + \n + \n | H \n + \n + H \n | \n 4+ HH \n + H \n | H \n + HH \n + HH \n | HHHH \n + HHHHHH \n 2+ HHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>This was the Maple code.</p>\n\n<pre>\nwith(plots);\nwith(combinat);\n\nENUM :=\nproc(n, m)\noption remember;\nlocal rolls, res, ind, counts, least, most;\n res := 0;\n\n for ind from m^n to 2*m^n-1 do\n rolls := convert(ind, base, m);\n\n counts := map(mel-&gt;op(2, mel),\n convert(rolls[1..n], &#96;multiset&#96;));\n\n res := res + max(counts);\n od;\n\n res/m^n;\nend;\n\nL := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m;\n\nX :=\nproc(n, m)\n option remember;\n local H;\n\n H := q -&gt; expand(L(m,q)-L(m,q-1));\n\n n!/m^n*\n coeff(add(q*H(q), q=1..n), z, n);\nend;\n\nLCF :=\nproc(n,m,q)\n option remember;\n\n if n &lt; 0 then return 0 fi;\n\n if m = 1 then\n if n &lt;= q then return 1/n! fi;\n return 0;\n fi;\n\n add(1/k!*LCF(n-k,m-1,q),k=0..min(q,n));\nend;\n\nLVERIF :=\n(m, q) -&gt; add(LCF(n, m, q)*z^n, n=0..q*m);\n\nXX :=\nproc(n, m)\noption remember;\nlocal res;\n\n res :=\n n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1);\n\n res*n!/m^n;\nend;\n\n\nDICEPLOT :=\nproc(nmx, m)\nlocal pts;\n\n pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)];\n pointplot(pts);\nend;\n</pre>\n" }, { "answer_id": 2114256, "score": 2, "text": "<p>Not an answer, but it's worth trying some example with small $N$.</p>\n\n<p>Let $M$ be your number.</p>\n\n<p>For $n=2$, and any $k/2 &lt; m \\leq k$ you have $P(M=m)=\\frac{\\binom{k}{m}}{2^{k-1}}$. 
If $k$ even, then $P(M=k/2)=\\frac{\\binom{k}{k/2}}{2^k}$.</p>\n\n<p>So, for $k=2j+1$ odd, you have:</p>\n\n<p>$$\\begin{align}E(M)&amp;=\\frac{1}{2^{2j}}\\sum_{m=j+1}^{2j+1} m\\binom{2j+1}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j+1}^{2j+1}\\binom{2j}{m-1}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j}^{2j}\\binom{2j}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\left(\\frac{1}{2}2^{2j}+\\frac{1}{2}\\binom{2j}{j}\\right)\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k-1}{(k-1)/2}}{2^{k-1}}\\right)\n\\end{align}$$</p>\n\n<p>For $N=2$ and $k=2j$, you get: $$\n\\begin{align}E(M)&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{1}{2^{2j-1}}\\sum_{m=j+1}^{2j}m\\binom{2j}{m}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{2j}{2^{2j-1}}\\sum_{m=j+1}^{2j}\\binom{2j-1}{m-1}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{j}{2^{2j-2}}\\cdot\\frac{2^{2j-1}}{2}\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k}{k/2}}{2^k}\\right)\n\\end{align}$$</p>\n\n<p>It's gonna get messier when trying it for $N=3$.</p>\n" }, { "answer_id": 2116031, "score": 2, "text": "<p>Here is a closed form, we will need more sophisticated methods for the\nasymptotics.<P></p>\n\n<p>Following the notation introduced at this <a href=\"https://math.stackexchange.com/questions/2112055/\">MSE\nlink</a> we suppose\nthat the die has $m$ faces and is rolled $n$ times. Rolling the die\nwith the most occured value being $q$ and instances of this size being\nmarked yields the species</p>\n\n<p>$$\\mathfrak{S}_{=m}\n(\\mathfrak{P}_{=0}(\\mathcal{Z})\n+ \\mathfrak{P}_{=1}(\\mathcal{Z})\n+ \\cdots\n+ \\mathcal{V}\\mathfrak{P}_{=q}(\\mathcal{Z})).$$</p>\n\n<p>This has generating function</p>\n\n<p>$$G(z,v) =\n\\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!} + v\\frac{z^q}{q!}\\right)^m.$$</p>\n\n<p>Subtracting the values where sets of size $q$ did not occur we obtain\nthe generating function</p>\n\n<p>$$H_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m\n- \\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!}\\right)^m.$$</p>\n\n<p>This also follows more or less by inspection.<P></p>\n\n<p>We then obtain for the desired quantity the closed form</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\frac{n!}{m^n} \n[z^n] \\sum_{q=1}^n q H_q(z).}$$</p>\n\n<p>Introducing </p>\n\n<p>$$L_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m$$</p>\n\n<p>we thus have</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \\sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p>\n\n<p>This is</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \n\\left(n L_n(z) - \\sum_{q=0}^{n-1} L_q(z)\\right).$$</p>\n\n<p>We also have for </p>\n\n<p>$$[z^n] L_q(z) =\n\\sum_{k=0}^{\\min(q, n)} \\frac{1}{k!} [z^{n-k}] \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^{m-1}$$</p>\n\n<p>Furthermore we obtain for $m=1$ </p>\n\n<p>$$[z^n] L_q(z) =\n[[n \\le q]] \\times \\frac{1}{n!}.$$</p>\n\n<p>With these we can implement a recursion, which in fact on being coded\nproved inferior to Maple's fast polynomial multiplication routines. 
It\nis included here because it memoizes coefficients of $L_q(z)$, thereby\nproviding a dramatic speed-up of the plots at the cost of allocating\nmore memory.<P></p>\n\n<p>All of this yields the following graph where we have scaled the plot\nby a factor of $n/m.$ This is it for a six-sided die:</p>\n\n<pre>\n 3+ H \n + \n | H \n + H \n | \n + H \n + HH \n | HH \n 2+ HH \n | HHH \n + HHHH \n + HHHHHHH \n | HHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>And here is the plot for a twelve-sided die. (I consider it worth\nobserving that we have the <strong>exact</strong> value for the expectation in the\ncase of $120$ rolls of this die, a case count that has $130$ digits,\nsimilar to what appeared in the companion post.)</p>\n\n<pre>\n 8+ \n | \n + \n + \n | \n + \n + H \n | \n 6+ \n | \n + \n + \n | H \n + \n + H \n | \n 4+ HH \n + H \n | H \n + HH \n + HH \n | HHHH \n + HHHHHH \n 2+ HHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>This was the Maple code.</p>\n\n<pre>\nwith(plots);\nwith(combinat);\n\nENUM :=\nproc(n, m)\noption remember;\nlocal rolls, res, ind, counts, least, most;\n res := 0;\n\n for ind from m^n to 2*m^n-1 do\n rolls := convert(ind, base, m);\n\n counts := map(mel-&gt;op(2, mel),\n convert(rolls[1..n], &#96;multiset&#96;));\n\n res := res + max(counts);\n od;\n\n res/m^n;\nend;\n\nL := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m;\n\nX :=\nproc(n, m)\n option remember;\n local H;\n\n H := q -&gt; expand(L(m,q)-L(m,q-1));\n\n n!/m^n*\n coeff(add(q*H(q), q=1..n), z, n);\nend;\n\nLCF :=\nproc(n,m,q)\n option remember;\n\n if n &lt; 0 then return 0 fi;\n\n if m = 1 then\n if n &lt;= q then return 1/n! fi;\n return 0;\n fi;\n\n add(1/k!*LCF(n-k,m-1,q),k=0..min(q,n));\nend;\n\nLVERIF :=\n(m, q) -&gt; add(LCF(n, m, q)*z^n, n=0..q*m);\n\nXX :=\nproc(n, m)\noption remember;\nlocal res;\n\n res :=\n n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1);\n\n res*n!/m^n;\nend;\n\n\nDICEPLOT :=\nproc(nmx, m)\nlocal pts;\n\n pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)];\n pointplot(pts);\nend;\n</pre>\n" }, { "answer_id": 2114256, "score": 2, "text": "<p>Not an answer, but it's worth trying some example with small $N$.</p>\n\n<p>Let $M$ be your number.</p>\n\n<p>For $n=2$, and any $k/2 &lt; m \\leq k$ you have $P(M=m)=\\frac{\\binom{k}{m}}{2^{k-1}}$. 
If $k$ even, then $P(M=k/2)=\\frac{\\binom{k}{k/2}}{2^k}$.</p>\n\n<p>So, for $k=2j+1$ odd, you have:</p>\n\n<p>$$\\begin{align}E(M)&amp;=\\frac{1}{2^{2j}}\\sum_{m=j+1}^{2j+1} m\\binom{2j+1}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j+1}^{2j+1}\\binom{2j}{m-1}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j}^{2j}\\binom{2j}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\left(\\frac{1}{2}2^{2j}+\\frac{1}{2}\\binom{2j}{j}\\right)\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k-1}{(k-1)/2}}{2^{k-1}}\\right)\n\\end{align}$$</p>\n\n<p>For $N=2$ and $k=2j$, you get: $$\n\\begin{align}E(M)&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{1}{2^{2j-1}}\\sum_{m=j+1}^{2j}m\\binom{2j}{m}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{2j}{2^{2j-1}}\\sum_{m=j+1}^{2j}\\binom{2j-1}{m-1}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{j}{2^{2j-2}}\\cdot\\frac{2^{2j-1}}{2}\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k}{k/2}}{2^k}\\right)\n\\end{align}$$</p>\n\n<p>It's gonna get messier when trying it for $N=3$.</p>\n" }, { "answer_id": 2116031, "score": 2, "text": "<p>Here is a closed form, we will need more sophisticated methods for the\nasymptotics.<P></p>\n\n<p>Following the notation introduced at this <a href=\"https://math.stackexchange.com/questions/2112055/\">MSE\nlink</a> we suppose\nthat the die has $m$ faces and is rolled $n$ times. Rolling the die\nwith the most occured value being $q$ and instances of this size being\nmarked yields the species</p>\n\n<p>$$\\mathfrak{S}_{=m}\n(\\mathfrak{P}_{=0}(\\mathcal{Z})\n+ \\mathfrak{P}_{=1}(\\mathcal{Z})\n+ \\cdots\n+ \\mathcal{V}\\mathfrak{P}_{=q}(\\mathcal{Z})).$$</p>\n\n<p>This has generating function</p>\n\n<p>$$G(z,v) =\n\\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!} + v\\frac{z^q}{q!}\\right)^m.$$</p>\n\n<p>Subtracting the values where sets of size $q$ did not occur we obtain\nthe generating function</p>\n\n<p>$$H_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m\n- \\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!}\\right)^m.$$</p>\n\n<p>This also follows more or less by inspection.<P></p>\n\n<p>We then obtain for the desired quantity the closed form</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\frac{n!}{m^n} \n[z^n] \\sum_{q=1}^n q H_q(z).}$$</p>\n\n<p>Introducing </p>\n\n<p>$$L_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m$$</p>\n\n<p>we thus have</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \\sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p>\n\n<p>This is</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \n\\left(n L_n(z) - \\sum_{q=0}^{n-1} L_q(z)\\right).$$</p>\n\n<p>We also have for </p>\n\n<p>$$[z^n] L_q(z) =\n\\sum_{k=0}^{\\min(q, n)} \\frac{1}{k!} [z^{n-k}] \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^{m-1}$$</p>\n\n<p>Furthermore we obtain for $m=1$ </p>\n\n<p>$$[z^n] L_q(z) =\n[[n \\le q]] \\times \\frac{1}{n!}.$$</p>\n\n<p>With these we can implement a recursion, which in fact on being coded\nproved inferior to Maple's fast polynomial multiplication routines. 
It\nis included here because it memoizes coefficients of $L_q(z)$, thereby\nproviding a dramatic speed-up of the plots at the cost of allocating\nmore memory.<P></p>\n\n<p>All of this yields the following graph where we have scaled the plot\nby a factor of $n/m.$ This is it for a six-sided die:</p>\n\n<pre>\n 3+ H \n + \n | H \n + H \n | \n + H \n + HH \n | HH \n 2+ HH \n | HHH \n + HHHH \n + HHHHHHH \n | HHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>And here is the plot for a twelve-sided die. (I consider it worth\nobserving that we have the <strong>exact</strong> value for the expectation in the\ncase of $120$ rolls of this die, a case count that has $130$ digits,\nsimilar to what appeared in the companion post.)</p>\n\n<pre>\n 8+ \n | \n + \n + \n | \n + \n + H \n | \n 6+ \n | \n + \n + \n | H \n + \n + H \n | \n 4+ HH \n + H \n | H \n + HH \n + HH \n | HHHH \n + HHHHHH \n 2+ HHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>This was the Maple code.</p>\n\n<pre>\nwith(plots);\nwith(combinat);\n\nENUM :=\nproc(n, m)\noption remember;\nlocal rolls, res, ind, counts, least, most;\n res := 0;\n\n for ind from m^n to 2*m^n-1 do\n rolls := convert(ind, base, m);\n\n counts := map(mel-&gt;op(2, mel),\n convert(rolls[1..n], &#96;multiset&#96;));\n\n res := res + max(counts);\n od;\n\n res/m^n;\nend;\n\nL := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m;\n\nX :=\nproc(n, m)\n option remember;\n local H;\n\n H := q -&gt; expand(L(m,q)-L(m,q-1));\n\n n!/m^n*\n coeff(add(q*H(q), q=1..n), z, n);\nend;\n\nLCF :=\nproc(n,m,q)\n option remember;\n\n if n &lt; 0 then return 0 fi;\n\n if m = 1 then\n if n &lt;= q then return 1/n! fi;\n return 0;\n fi;\n\n add(1/k!*LCF(n-k,m-1,q),k=0..min(q,n));\nend;\n\nLVERIF :=\n(m, q) -&gt; add(LCF(n, m, q)*z^n, n=0..q*m);\n\nXX :=\nproc(n, m)\noption remember;\nlocal res;\n\n res :=\n n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1);\n\n res*n!/m^n;\nend;\n\n\nDICEPLOT :=\nproc(nmx, m)\nlocal pts;\n\n pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)];\n pointplot(pts);\nend;\n</pre>\n" }, { "answer_id": 2114256, "score": 2, "text": "<p>Not an answer, but it's worth trying some example with small $N$.</p>\n\n<p>Let $M$ be your number.</p>\n\n<p>For $n=2$, and any $k/2 &lt; m \\leq k$ you have $P(M=m)=\\frac{\\binom{k}{m}}{2^{k-1}}$. 
If $k$ even, then $P(M=k/2)=\\frac{\\binom{k}{k/2}}{2^k}$.</p>\n\n<p>So, for $k=2j+1$ odd, you have:</p>\n\n<p>$$\\begin{align}E(M)&amp;=\\frac{1}{2^{2j}}\\sum_{m=j+1}^{2j+1} m\\binom{2j+1}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j+1}^{2j+1}\\binom{2j}{m-1}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j}^{2j}\\binom{2j}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\left(\\frac{1}{2}2^{2j}+\\frac{1}{2}\\binom{2j}{j}\\right)\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k-1}{(k-1)/2}}{2^{k-1}}\\right)\n\\end{align}$$</p>\n\n<p>For $N=2$ and $k=2j$, you get: $$\n\\begin{align}E(M)&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{1}{2^{2j-1}}\\sum_{m=j+1}^{2j}m\\binom{2j}{m}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{2j}{2^{2j-1}}\\sum_{m=j+1}^{2j}\\binom{2j-1}{m-1}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{j}{2^{2j-2}}\\cdot\\frac{2^{2j-1}}{2}\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k}{k/2}}{2^k}\\right)\n\\end{align}$$</p>\n\n<p>It's gonna get messier when trying it for $N=3$.</p>\n" }, { "answer_id": 2116031, "score": 2, "text": "<p>Here is a closed form, we will need more sophisticated methods for the\nasymptotics.<P></p>\n\n<p>Following the notation introduced at this <a href=\"https://math.stackexchange.com/questions/2112055/\">MSE\nlink</a> we suppose\nthat the die has $m$ faces and is rolled $n$ times. Rolling the die\nwith the most occured value being $q$ and instances of this size being\nmarked yields the species</p>\n\n<p>$$\\mathfrak{S}_{=m}\n(\\mathfrak{P}_{=0}(\\mathcal{Z})\n+ \\mathfrak{P}_{=1}(\\mathcal{Z})\n+ \\cdots\n+ \\mathcal{V}\\mathfrak{P}_{=q}(\\mathcal{Z})).$$</p>\n\n<p>This has generating function</p>\n\n<p>$$G(z,v) =\n\\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!} + v\\frac{z^q}{q!}\\right)^m.$$</p>\n\n<p>Subtracting the values where sets of size $q$ did not occur we obtain\nthe generating function</p>\n\n<p>$$H_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m\n- \\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!}\\right)^m.$$</p>\n\n<p>This also follows more or less by inspection.<P></p>\n\n<p>We then obtain for the desired quantity the closed form</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\frac{n!}{m^n} \n[z^n] \\sum_{q=1}^n q H_q(z).}$$</p>\n\n<p>Introducing </p>\n\n<p>$$L_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m$$</p>\n\n<p>we thus have</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \\sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p>\n\n<p>This is</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \n\\left(n L_n(z) - \\sum_{q=0}^{n-1} L_q(z)\\right).$$</p>\n\n<p>We also have for </p>\n\n<p>$$[z^n] L_q(z) =\n\\sum_{k=0}^{\\min(q, n)} \\frac{1}{k!} [z^{n-k}] \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^{m-1}$$</p>\n\n<p>Furthermore we obtain for $m=1$ </p>\n\n<p>$$[z^n] L_q(z) =\n[[n \\le q]] \\times \\frac{1}{n!}.$$</p>\n\n<p>With these we can implement a recursion, which in fact on being coded\nproved inferior to Maple's fast polynomial multiplication routines. 
It\nis included here because it memoizes coefficients of $L_q(z)$, thereby\nproviding a dramatic speed-up of the plots at the cost of allocating\nmore memory.<P></p>\n\n<p>All of this yields the following graph where we have scaled the plot\nby a factor of $n/m.$ This is it for a six-sided die:</p>\n\n<pre>\n 3+ H \n + \n | H \n + H \n | \n + H \n + HH \n | HH \n 2+ HH \n | HHH \n + HHHH \n + HHHHHHH \n | HHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>And here is the plot for a twelve-sided die. (I consider it worth\nobserving that we have the <strong>exact</strong> value for the expectation in the\ncase of $120$ rolls of this die, a case count that has $130$ digits,\nsimilar to what appeared in the companion post.)</p>\n\n<pre>\n 8+ \n | \n + \n + \n | \n + \n + H \n | \n 6+ \n | \n + \n + \n | H \n + \n + H \n | \n 4+ HH \n + H \n | H \n + HH \n + HH \n | HHHH \n + HHHHHH \n 2+ HHHHHHHHHHH \n | HHHHHHHHHHHHHHHHHHHHHHHHH \n + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH \n -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- \n 0 20 40 60 80 100 120 \n</pre>\n\n<p>This was the Maple code.</p>\n\n<pre>\nwith(plots);\nwith(combinat);\n\nENUM :=\nproc(n, m)\noption remember;\nlocal rolls, res, ind, counts, least, most;\n res := 0;\n\n for ind from m^n to 2*m^n-1 do\n rolls := convert(ind, base, m);\n\n counts := map(mel-&gt;op(2, mel),\n convert(rolls[1..n], &#96;multiset&#96;));\n\n res := res + max(counts);\n od;\n\n res/m^n;\nend;\n\nL := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m;\n\nX :=\nproc(n, m)\n option remember;\n local H;\n\n H := q -&gt; expand(L(m,q)-L(m,q-1));\n\n n!/m^n*\n coeff(add(q*H(q), q=1..n), z, n);\nend;\n\nLCF :=\nproc(n,m,q)\n option remember;\n\n if n &lt; 0 then return 0 fi;\n\n if m = 1 then\n if n &lt;= q then return 1/n! fi;\n return 0;\n fi;\n\n add(1/k!*LCF(n-k,m-1,q),k=0..min(q,n));\nend;\n\nLVERIF :=\n(m, q) -&gt; add(LCF(n, m, q)*z^n, n=0..q*m);\n\nXX :=\nproc(n, m)\noption remember;\nlocal res;\n\n res :=\n n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1);\n\n res*n!/m^n;\nend;\n\n\nDICEPLOT :=\nproc(nmx, m)\nlocal pts;\n\n pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)];\n pointplot(pts);\nend;\n</pre>\n" }, { "answer_id": 2114256, "score": 2, "text": "<p>Not an answer, but it's worth trying some example with small $N$.</p>\n\n<p>Let $M$ be your number.</p>\n\n<p>For $n=2$, and any $k/2 &lt; m \\leq k$ you have $P(M=m)=\\frac{\\binom{k}{m}}{2^{k-1}}$. 
If $k$ even, then $P(M=k/2)=\\frac{\\binom{k}{k/2}}{2^k}$.</p>\n\n<p>So, for $k=2j+1$ odd, you have:</p>\n\n<p>$$\\begin{align}E(M)&amp;=\\frac{1}{2^{2j}}\\sum_{m=j+1}^{2j+1} m\\binom{2j+1}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j+1}^{2j+1}\\binom{2j}{m-1}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\sum_{m=j}^{2j}\\binom{2j}{m}\\\\\n&amp;=\\frac{2j+1}{2^{2j}}\\left(\\frac{1}{2}2^{2j}+\\frac{1}{2}\\binom{2j}{j}\\right)\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k-1}{(k-1)/2}}{2^{k-1}}\\right)\n\\end{align}$$</p>\n\n<p>For $N=2$ and $k=2j$, you get: $$\n\\begin{align}E(M)&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{1}{2^{2j-1}}\\sum_{m=j+1}^{2j}m\\binom{2j}{m}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{2j}{2^{2j-1}}\\sum_{m=j+1}^{2j}\\binom{2j-1}{m-1}\\\\\n&amp;=j\\frac{\\binom{2j}{j}}{2^{2j}}+\\frac{j}{2^{2j-2}}\\cdot\\frac{2^{2j-1}}{2}\\\\\n&amp;=\\frac{k}{2}\\left(1+\\frac{\\binom{k}{k/2}}{2^k}\\right)\n\\end{align}$$</p>\n\n<p>It's gonna get messier when trying it for $N=3$.</p>\n" }, { "answer_id": 2116031, "score": 2, "text": "<p>Here is a closed form, we will need more sophisticated methods for the\nasymptotics.<P></p>\n\n<p>Following the notation introduced at this <a href=\"https://math.stackexchange.com/questions/2112055/\">MSE\nlink</a> we suppose\nthat the die has $m$ faces and is rolled $n$ times. Rolling the die\nwith the most occured value being $q$ and instances of this size being\nmarked yields the species</p>\n\n<p>$$\\mathfrak{S}_{=m}\n(\\mathfrak{P}_{=0}(\\mathcal{Z})\n+ \\mathfrak{P}_{=1}(\\mathcal{Z})\n+ \\cdots\n+ \\mathcal{V}\\mathfrak{P}_{=q}(\\mathcal{Z})).$$</p>\n\n<p>This has generating function</p>\n\n<p>$$G(z,v) =\n\\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!} + v\\frac{z^q}{q!}\\right)^m.$$</p>\n\n<p>Subtracting the values where sets of size $q$ did not occur we obtain\nthe generating function</p>\n\n<p>$$H_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m\n- \\left(\\sum_{r=0}^{q-1} \\frac{z^r}{r!}\\right)^m.$$</p>\n\n<p>This also follows more or less by inspection.<P></p>\n\n<p>We then obtain for the desired quantity the closed form</p>\n\n<p>$$\\bbox[5px,border:2px solid #00A000]{\n\\frac{n!}{m^n} \n[z^n] \\sum_{q=1}^n q H_q(z).}$$</p>\n\n<p>Introducing </p>\n\n<p>$$L_{q}(z) = \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^m$$</p>\n\n<p>we thus have</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \\sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p>\n\n<p>This is</p>\n\n<p>$$\\frac{n!}{m^n} [z^n] \n\\left(n L_n(z) - \\sum_{q=0}^{n-1} L_q(z)\\right).$$</p>\n\n<p>We also have for </p>\n\n<p>$$[z^n] L_q(z) =\n\\sum_{k=0}^{\\min(q, n)} \\frac{1}{k!} [z^{n-k}] \n\\left(\\sum_{r=0}^{q} \\frac{z^r}{r!}\\right)^{m-1}$$</p>\n\n<p>Furthermore we obtain for $m=1$ </p>\n\n<p>$$[z^n] L_q(z) =\n[[n \\le q]] \\times \\frac{1}{n!}.$$</p>\n\n<p>With these we can implement a recursion, which in fact on being coded\nproved inferior to Maple's fast polynomial multiplication routines. 
Question: Roll an N sided die K times. Let S be the side that appeared most often. What is the expected number of times S appeared? Answer 1: <p>Not an answer, but it's worth trying some examples with small $N$.</p>

<p>Let $M$ be your number.</p>

<p>For $N=2$, and any $k/2 &lt; m \leq k$, you have $P(M=m)=\frac{\binom{k}{m}}{2^{k-1}}$, since the maximal count is $m$ exactly when one of the two sides shows $m$ times. If $k$ is even, then $P(M=k/2)=\frac{\binom{k}{k/2}}{2^k}$.</p>

<p>So, for $k=2j+1$ odd, you have:</p>

<p>$$\begin{align}E(M)&amp;=\frac{1}{2^{2j}}\sum_{m=j+1}^{2j+1} m\binom{2j+1}{m}\\
&amp;=\frac{2j+1}{2^{2j}}\sum_{m=j+1}^{2j+1}\binom{2j}{m-1}\\
&amp;=\frac{2j+1}{2^{2j}}\sum_{m=j}^{2j}\binom{2j}{m}\\
&amp;=\frac{2j+1}{2^{2j}}\left(\frac{1}{2}2^{2j}+\frac{1}{2}\binom{2j}{j}\right)\\
&amp;=\frac{k}{2}\left(1+\frac{\binom{k-1}{(k-1)/2}}{2^{k-1}}\right)
\end{align}$$</p>

<p>(The fourth line uses the symmetry $\binom{2j}{m}=\binom{2j}{2j-m}$: the upper half of the binomial row sums to half of $2^{2j}$ plus half of the central coefficient $\binom{2j}{j}$.)</p>

<p>For $N=2$ and $k=2j$, you get: $$
\begin{align}E(M)&amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{1}{2^{2j-1}}\sum_{m=j+1}^{2j}m\binom{2j}{m}\\
&amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{2j}{2^{2j-1}}\sum_{m=j+1}^{2j}\binom{2j-1}{m-1}\\
&amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{j}{2^{2j-2}}\cdot\frac{2^{2j-1}}{2}\\
&amp;=\frac{k}{2}\left(1+\frac{\binom{k}{k/2}}{2^k}\right)
\end{align}$$</p>

<p>It's going to get messier when trying it for $N=3$.</p>
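<p>As a quick sanity check, both closed forms can be compared against the brute-force ENUM procedure from the Maple code in the next answer. This is a minimal sketch, assuming that code has been loaded; the wrapper names E2odd and E2even are mine, not part of either answer:</p>

<pre>
# The two N = 2 closed forms above, checked against the ENUM
# brute force from the next answer's Maple code.
E2odd  := k -&gt; k/2*(1 + binomial(k-1, (k-1)/2)/2^(k-1));
E2even := k -&gt; k/2*(1 + binomial(k, k/2)/2^k);

seq(E2odd(2*j+1) - ENUM(2*j+1, 2), j = 0..4);   # expect 0, 0, 0, 0, 0
seq(E2even(2*j)  - ENUM(2*j, 2),   j = 1..4);   # expect 0, 0, 0, 0
</pre>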
Answer 2: <p>Here is a closed form; we will need more sophisticated methods for the asymptotics.</p>

<p>Following the notation introduced at this <a href="https://math.stackexchange.com/questions/2112055/">MSE link</a>, we suppose that the die has $m$ faces and is rolled $n$ times. Rolling the die with the maximal multiplicity being $q$, and with the instances of that multiplicity marked, yields the species</p>

<p>$$\mathfrak{S}_{=m}
(\mathfrak{P}_{=0}(\mathcal{Z})
+ \mathfrak{P}_{=1}(\mathcal{Z})
+ \cdots
+ \mathcal{V}\mathfrak{P}_{=q}(\mathcal{Z})).$$</p>

<p>This has generating function</p>

<p>$$G(z,v) =
\left(\sum_{r=0}^{q-1} \frac{z^r}{r!} + v\frac{z^q}{q!}\right)^m.$$</p>

<p>Subtracting the values where sets of size $q$ did not occur, we obtain the generating function</p>

<p>$$H_{q}(z) =
\left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m
- \left(\sum_{r=0}^{q-1} \frac{z^r}{r!}\right)^m.$$</p>

<p>This also follows more or less by inspection.</p>

<p>We then obtain for the desired quantity the closed form</p>

<p>$$\bbox[5px,border:2px solid #00A000]{
\frac{n!}{m^n}
[z^n] \sum_{q=1}^n q H_q(z).}$$</p>

<p>Introducing</p>

<p>$$L_{q}(z) =
\left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m$$</p>

<p>we thus have</p>

<p>$$\frac{n!}{m^n} [z^n] \sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p>

<p>This is</p>

<p>$$\frac{n!}{m^n} [z^n]
\left(n L_n(z) - \sum_{q=0}^{n-1} L_q(z)\right).$$</p>

<p>We also have, peeling off one factor of the product,</p>

<p>$$[z^n] L_q(z) =
\sum_{k=0}^{\min(q, n)} \frac{1}{k!} [z^{n-k}]
\left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^{m-1},$$</p>

<p>and furthermore we obtain for $m=1$</p>

<p>$$[z^n] L_q(z) =
[[n \le q]] \times \frac{1}{n!}.$$</p>

<p>With these we can implement a recursion, which in fact, once coded, proved inferior to Maple's fast polynomial multiplication routines. It is included here because it memoizes the coefficients of $L_q(z)$, thereby providing a dramatic speed-up of the plots at the cost of allocating more memory.</p>

<p>All of this yields the following graph, where we have scaled the plot by a factor of $n/m$ (the values shown are the expectation divided by $n/m$). This is it for a six-sided die:</p>

<pre>
 3+ H 
 + 
 | H 
 + H 
 | 
 + H 
 + HH 
 | HH 
 2+ HH 
 | HHH 
 + HHHH 
 + HHHHHHH 
 | HHHHHHHHHHH 
 + HHHHHHHHHHHHHHHHHHHHHHHHH 
 | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH 
 -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- 
 0 20 40 60 80 100 120 
</pre>

<p>And here is the plot for a twelve-sided die. (I consider it worth observing that we have the <strong>exact</strong> value for the expectation in the case of $120$ rolls of this die, a case count that has $130$ digits, similar to what appeared in the companion post.)</p>

<pre>
 8+ 
 | 
 + 
 + 
 | 
 + 
 + H 
 | 
 6+ 
 | 
 + 
 + 
 | H 
 + 
 + H 
 | 
 4+ HH 
 + H 
 | H 
 + HH 
 + HH 
 | HHHH 
 + HHHHHH 
 2+ HHHHHHHHHHH 
 | HHHHHHHHHHHHHHHHHHHHHHHHH 
 + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH 
 -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- 
 0 20 40 60 80 100 120 
</pre>

<p>This was the Maple code.</p>

<pre>
with(plots);
with(combinat);

# Brute force: indices m^n .. 2*m^n-1 have n+1 base-m digits, and
# their first n digits run through all m^n roll sequences; average
# the maximal face count over these.
ENUM :=
proc(n, m)
option remember;
local rolls, res, ind, counts;
  res := 0;

  for ind from m^n to 2*m^n-1 do
    rolls := convert(ind, base, m);

    counts := map(mel-&gt;op(2, mel),
                  convert(rolls[1..n], &#96;multiset&#96;));

    res := res + max(counts);
  od;

  res/m^n;
end;

# The generating function L_rmax(z) = (sum_{r=0..rmax} z^r/r!)^m.
L := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m;

# Expectation by direct coefficient extraction from sum_q q*H_q(z).
X :=
proc(n, m)
option remember;
local H;

  H := q -&gt; expand(L(m,q)-L(m,q-1));

  n!/m^n*
  coeff(add(q*H(q), q=1..n), z, n);
end;

# Memoized coefficient [z^n] L_q(z), via the recursion in m above.
LCF :=
proc(n,m,q)
option remember;

  if n &lt; 0 then return 0 fi;

  if m = 1 then
    if n &lt;= q then return 1/n! fi;
    return 0;
  fi;

  add(1/k!*LCF(n-k,m-1,q), k=0..min(q,n));
end;

# Rebuilds L_q(z) from the memoized coefficients, as a verification.
LVERIF :=
(m, q) -&gt; add(LCF(n, m, q)*z^n, n=0..q*m);

# Expectation via n*[z^n]L_n(z) - sum_{q=0..n-1} [z^n]L_q(z).
XX :=
proc(n, m)
option remember;
local res;

  res :=
  n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1);

  res*n!/m^n;
end;

# Plot of the expectation divided by n/m, for n = 1..nmx rolls.
DICEPLOT :=
proc(nmx, m)
local pts;

  pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)];
  pointplot(pts);
end;
</pre>
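<p>For what it's worth, the procedures above can be cross-checked against one another on small cases, and the plot calls below reproduce the graphs shown. This is a minimal sketch, assuming the code above has been loaded:</p>

<pre>
seq(ENUM(n, 3) - X(n, 3), n = 1..5);   # brute force vs. coefficient extraction: expect zeros
seq(X(n, 4) - XX(n, 4), n = 1..6);     # direct extraction vs. memoized recursion: expect zeros
XX(120, 12);                           # the exact expectation for 120 rolls of a twelve-sided die
DICEPLOT(120, 6);                      # the scaled six-sided plot shown above
</pre>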
Rolling the die with the most occured value being $q$ and instances of this size being marked yields the species</p> <p>$$\mathfrak{S}_{=m} (\mathfrak{P}_{=0}(\mathcal{Z}) + \mathfrak{P}_{=1}(\mathcal{Z}) + \cdots + \mathcal{V}\mathfrak{P}_{=q}(\mathcal{Z})).$$</p> <p>This has generating function</p> <p>$$G(z,v) = \left(\sum_{r=0}^{q-1} \frac{z^r}{r!} + v\frac{z^q}{q!}\right)^m.$$</p> <p>Subtracting the values where sets of size $q$ did not occur we obtain the generating function</p> <p>$$H_{q}(z) = \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m - \left(\sum_{r=0}^{q-1} \frac{z^r}{r!}\right)^m.$$</p> <p>This also follows more or less by inspection.<P></p> <p>We then obtain for the desired quantity the closed form</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \frac{n!}{m^n} [z^n] \sum_{q=1}^n q H_q(z).}$$</p> <p>Introducing </p> <p>$$L_{q}(z) = \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m$$</p> <p>we thus have</p> <p>$$\frac{n!}{m^n} [z^n] \sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p> <p>This is</p> <p>$$\frac{n!}{m^n} [z^n] \left(n L_n(z) - \sum_{q=0}^{n-1} L_q(z)\right).$$</p> <p>We also have for </p> <p>$$[z^n] L_q(z) = \sum_{k=0}^{\min(q, n)} \frac{1}{k!} [z^{n-k}] \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^{m-1}$$</p> <p>Furthermore we obtain for $m=1$ </p> <p>$$[z^n] L_q(z) = [[n \le q]] \times \frac{1}{n!}.$$</p> <p>With these we can implement a recursion, which in fact on being coded proved inferior to Maple's fast polynomial multiplication routines. It is included here because it memoizes coefficients of $L_q(z)$, thereby providing a dramatic speed-up of the plots at the cost of allocating more memory.<P></p> <p>All of this yields the following graph where we have scaled the plot by a factor of $n/m.$ This is it for a six-sided die:</p> <pre> 3+ H + | H + H | + H + HH | HH 2+ HH | HHH + HHHH + HHHHHHH | HHHHHHHHHHH + HHHHHHHHHHHHHHHHHHHHHHHHH | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- 0 20 40 60 80 100 120 </pre> <p>And here is the plot for a twelve-sided die. (I consider it worth observing that we have the <strong>exact</strong> value for the expectation in the case of $120$ rolls of this die, a case count that has $130$ digits, similar to what appeared in the companion post.)</p> <pre> 8+ | + + | + + H | 6+ | + + | H + + H | 4+ HH + H | H + HH + HH | HHHH + HHHHHH 2+ HHHHHHHHHHH | HHHHHHHHHHHHHHHHHHHHHHHHH + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- 0 20 40 60 80 100 120 </pre> <p>This was the Maple code.</p> <pre> with(plots); with(combinat); ENUM := proc(n, m) option remember; local rolls, res, ind, counts, least, most; res := 0; for ind from m^n to 2*m^n-1 do rolls := convert(ind, base, m); counts := map(mel-&gt;op(2, mel), convert(rolls[1..n], &#96;multiset&#96;)); res := res + max(counts); od; res/m^n; end; L := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m; X := proc(n, m) option remember; local H; H := q -&gt; expand(L(m,q)-L(m,q-1)); n!/m^n* coeff(add(q*H(q), q=1..n), z, n); end; LCF := proc(n,m,q) option remember; if n &lt; 0 then return 0 fi; if m = 1 then if n &lt;= q then return 1/n! 
fi; return 0; fi; add(1/k!*LCF(n-k,m-1,q),k=0..min(q,n)); end; LVERIF := (m, q) -&gt; add(LCF(n, m, q)*z^n, n=0..q*m); XX := proc(n, m) option remember; local res; res := n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1); res*n!/m^n; end; DICEPLOT := proc(nmx, m) local pts; pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)]; pointplot(pts); end; </pre> Answer 5: <p>Not an answer, but it's worth trying some example with small $N$.</p> <p>Let $M$ be your number.</p> <p>For $n=2$, and any $k/2 &lt; m \leq k$ you have $P(M=m)=\frac{\binom{k}{m}}{2^{k-1}}$. If $k$ even, then $P(M=k/2)=\frac{\binom{k}{k/2}}{2^k}$.</p> <p>So, for $k=2j+1$ odd, you have:</p> <p>$$\begin{align}E(M)&amp;=\frac{1}{2^{2j}}\sum_{m=j+1}^{2j+1} m\binom{2j+1}{m}\\ &amp;=\frac{2j+1}{2^{2j}}\sum_{m=j+1}^{2j+1}\binom{2j}{m-1}\\ &amp;=\frac{2j+1}{2^{2j}}\sum_{m=j}^{2j}\binom{2j}{m}\\ &amp;=\frac{2j+1}{2^{2j}}\left(\frac{1}{2}2^{2j}+\frac{1}{2}\binom{2j}{j}\right)\\ &amp;=\frac{k}{2}\left(1+\frac{\binom{k-1}{(k-1)/2}}{2^{k-1}}\right) \end{align}$$</p> <p>For $N=2$ and $k=2j$, you get: $$ \begin{align}E(M)&amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{1}{2^{2j-1}}\sum_{m=j+1}^{2j}m\binom{2j}{m}\\ &amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{2j}{2^{2j-1}}\sum_{m=j+1}^{2j}\binom{2j-1}{m-1}\\ &amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{j}{2^{2j-2}}\cdot\frac{2^{2j-1}}{2}\\ &amp;=\frac{k}{2}\left(1+\frac{\binom{k}{k/2}}{2^k}\right) \end{align}$$</p> <p>It's gonna get messier when trying it for $N=3$.</p> Answer 6: <p>Here is a closed form, we will need more sophisticated methods for the asymptotics.<P></p> <p>Following the notation introduced at this <a href="https://math.stackexchange.com/questions/2112055/">MSE link</a> we suppose that the die has $m$ faces and is rolled $n$ times. Rolling the die with the most occured value being $q$ and instances of this size being marked yields the species</p> <p>$$\mathfrak{S}_{=m} (\mathfrak{P}_{=0}(\mathcal{Z}) + \mathfrak{P}_{=1}(\mathcal{Z}) + \cdots + \mathcal{V}\mathfrak{P}_{=q}(\mathcal{Z})).$$</p> <p>This has generating function</p> <p>$$G(z,v) = \left(\sum_{r=0}^{q-1} \frac{z^r}{r!} + v\frac{z^q}{q!}\right)^m.$$</p> <p>Subtracting the values where sets of size $q$ did not occur we obtain the generating function</p> <p>$$H_{q}(z) = \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m - \left(\sum_{r=0}^{q-1} \frac{z^r}{r!}\right)^m.$$</p> <p>This also follows more or less by inspection.<P></p> <p>We then obtain for the desired quantity the closed form</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \frac{n!}{m^n} [z^n] \sum_{q=1}^n q H_q(z).}$$</p> <p>Introducing </p> <p>$$L_{q}(z) = \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m$$</p> <p>we thus have</p> <p>$$\frac{n!}{m^n} [z^n] \sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p> <p>This is</p> <p>$$\frac{n!}{m^n} [z^n] \left(n L_n(z) - \sum_{q=0}^{n-1} L_q(z)\right).$$</p> <p>We also have for </p> <p>$$[z^n] L_q(z) = \sum_{k=0}^{\min(q, n)} \frac{1}{k!} [z^{n-k}] \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^{m-1}$$</p> <p>Furthermore we obtain for $m=1$ </p> <p>$$[z^n] L_q(z) = [[n \le q]] \times \frac{1}{n!}.$$</p> <p>With these we can implement a recursion, which in fact on being coded proved inferior to Maple's fast polynomial multiplication routines. 
It is included here because it memoizes coefficients of $L_q(z)$, thereby providing a dramatic speed-up of the plots at the cost of allocating more memory.<P></p> <p>All of this yields the following graph where we have scaled the plot by a factor of $n/m.$ This is it for a six-sided die:</p> <pre> 3+ H + | H + H | + H + HH | HH 2+ HH | HHH + HHHH + HHHHHHH | HHHHHHHHHHH + HHHHHHHHHHHHHHHHHHHHHHHHH | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- 0 20 40 60 80 100 120 </pre> <p>And here is the plot for a twelve-sided die. (I consider it worth observing that we have the <strong>exact</strong> value for the expectation in the case of $120$ rolls of this die, a case count that has $130$ digits, similar to what appeared in the companion post.)</p> <pre> 8+ | + + | + + H | 6+ | + + | H + + H | 4+ HH + H | H + HH + HH | HHHH + HHHHHH 2+ HHHHHHHHHHH | HHHHHHHHHHHHHHHHHHHHHHHHH + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- 0 20 40 60 80 100 120 </pre> <p>This was the Maple code.</p> <pre> with(plots); with(combinat); ENUM := proc(n, m) option remember; local rolls, res, ind, counts, least, most; res := 0; for ind from m^n to 2*m^n-1 do rolls := convert(ind, base, m); counts := map(mel-&gt;op(2, mel), convert(rolls[1..n], &#96;multiset&#96;)); res := res + max(counts); od; res/m^n; end; L := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m; X := proc(n, m) option remember; local H; H := q -&gt; expand(L(m,q)-L(m,q-1)); n!/m^n* coeff(add(q*H(q), q=1..n), z, n); end; LCF := proc(n,m,q) option remember; if n &lt; 0 then return 0 fi; if m = 1 then if n &lt;= q then return 1/n! fi; return 0; fi; add(1/k!*LCF(n-k,m-1,q),k=0..min(q,n)); end; LVERIF := (m, q) -&gt; add(LCF(n, m, q)*z^n, n=0..q*m); XX := proc(n, m) option remember; local res; res := n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1); res*n!/m^n; end; DICEPLOT := proc(nmx, m) local pts; pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)]; pointplot(pts); end; </pre> Answer 7: <p>Not an answer, but it's worth trying some example with small $N$.</p> <p>Let $M$ be your number.</p> <p>For $n=2$, and any $k/2 &lt; m \leq k$ you have $P(M=m)=\frac{\binom{k}{m}}{2^{k-1}}$. If $k$ even, then $P(M=k/2)=\frac{\binom{k}{k/2}}{2^k}$.</p> <p>So, for $k=2j+1$ odd, you have:</p> <p>$$\begin{align}E(M)&amp;=\frac{1}{2^{2j}}\sum_{m=j+1}^{2j+1} m\binom{2j+1}{m}\\ &amp;=\frac{2j+1}{2^{2j}}\sum_{m=j+1}^{2j+1}\binom{2j}{m-1}\\ &amp;=\frac{2j+1}{2^{2j}}\sum_{m=j}^{2j}\binom{2j}{m}\\ &amp;=\frac{2j+1}{2^{2j}}\left(\frac{1}{2}2^{2j}+\frac{1}{2}\binom{2j}{j}\right)\\ &amp;=\frac{k}{2}\left(1+\frac{\binom{k-1}{(k-1)/2}}{2^{k-1}}\right) \end{align}$$</p> <p>For $N=2$ and $k=2j$, you get: $$ \begin{align}E(M)&amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{1}{2^{2j-1}}\sum_{m=j+1}^{2j}m\binom{2j}{m}\\ &amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{2j}{2^{2j-1}}\sum_{m=j+1}^{2j}\binom{2j-1}{m-1}\\ &amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{j}{2^{2j-2}}\cdot\frac{2^{2j-1}}{2}\\ &amp;=\frac{k}{2}\left(1+\frac{\binom{k}{k/2}}{2^k}\right) \end{align}$$</p> <p>It's gonna get messier when trying it for $N=3$.</p> Answer 8: <p>Here is a closed form, we will need more sophisticated methods for the asymptotics.<P></p> <p>Following the notation introduced at this <a href="https://math.stackexchange.com/questions/2112055/">MSE link</a> we suppose that the die has $m$ faces and is rolled $n$ times. 
Rolling the die with the most occured value being $q$ and instances of this size being marked yields the species</p> <p>$$\mathfrak{S}_{=m} (\mathfrak{P}_{=0}(\mathcal{Z}) + \mathfrak{P}_{=1}(\mathcal{Z}) + \cdots + \mathcal{V}\mathfrak{P}_{=q}(\mathcal{Z})).$$</p> <p>This has generating function</p> <p>$$G(z,v) = \left(\sum_{r=0}^{q-1} \frac{z^r}{r!} + v\frac{z^q}{q!}\right)^m.$$</p> <p>Subtracting the values where sets of size $q$ did not occur we obtain the generating function</p> <p>$$H_{q}(z) = \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m - \left(\sum_{r=0}^{q-1} \frac{z^r}{r!}\right)^m.$$</p> <p>This also follows more or less by inspection.<P></p> <p>We then obtain for the desired quantity the closed form</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \frac{n!}{m^n} [z^n] \sum_{q=1}^n q H_q(z).}$$</p> <p>Introducing </p> <p>$$L_{q}(z) = \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m$$</p> <p>we thus have</p> <p>$$\frac{n!}{m^n} [z^n] \sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p> <p>This is</p> <p>$$\frac{n!}{m^n} [z^n] \left(n L_n(z) - \sum_{q=0}^{n-1} L_q(z)\right).$$</p> <p>We also have for </p> <p>$$[z^n] L_q(z) = \sum_{k=0}^{\min(q, n)} \frac{1}{k!} [z^{n-k}] \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^{m-1}$$</p> <p>Furthermore we obtain for $m=1$ </p> <p>$$[z^n] L_q(z) = [[n \le q]] \times \frac{1}{n!}.$$</p> <p>With these we can implement a recursion, which in fact on being coded proved inferior to Maple's fast polynomial multiplication routines. It is included here because it memoizes coefficients of $L_q(z)$, thereby providing a dramatic speed-up of the plots at the cost of allocating more memory.<P></p> <p>All of this yields the following graph where we have scaled the plot by a factor of $n/m.$ This is it for a six-sided die:</p> <pre> 3+ H + | H + H | + H + HH | HH 2+ HH | HHH + HHHH + HHHHHHH | HHHHHHHHHHH + HHHHHHHHHHHHHHHHHHHHHHHHH | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- 0 20 40 60 80 100 120 </pre> <p>And here is the plot for a twelve-sided die. (I consider it worth observing that we have the <strong>exact</strong> value for the expectation in the case of $120$ rolls of this die, a case count that has $130$ digits, similar to what appeared in the companion post.)</p> <pre> 8+ | + + | + + H | 6+ | + + | H + + H | 4+ HH + H | H + HH + HH | HHHH + HHHHHH 2+ HHHHHHHHHHH | HHHHHHHHHHHHHHHHHHHHHHHHH + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- 0 20 40 60 80 100 120 </pre> <p>This was the Maple code.</p> <pre> with(plots); with(combinat); ENUM := proc(n, m) option remember; local rolls, res, ind, counts, least, most; res := 0; for ind from m^n to 2*m^n-1 do rolls := convert(ind, base, m); counts := map(mel-&gt;op(2, mel), convert(rolls[1..n], &#96;multiset&#96;)); res := res + max(counts); od; res/m^n; end; L := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m; X := proc(n, m) option remember; local H; H := q -&gt; expand(L(m,q)-L(m,q-1)); n!/m^n* coeff(add(q*H(q), q=1..n), z, n); end; LCF := proc(n,m,q) option remember; if n &lt; 0 then return 0 fi; if m = 1 then if n &lt;= q then return 1/n! 
fi; return 0; fi; add(1/k!*LCF(n-k,m-1,q),k=0..min(q,n)); end; LVERIF := (m, q) -&gt; add(LCF(n, m, q)*z^n, n=0..q*m); XX := proc(n, m) option remember; local res; res := n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1); res*n!/m^n; end; DICEPLOT := proc(nmx, m) local pts; pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)]; pointplot(pts); end; </pre> Answer 9: <p>Not an answer, but it's worth trying some example with small $N$.</p> <p>Let $M$ be your number.</p> <p>For $n=2$, and any $k/2 &lt; m \leq k$ you have $P(M=m)=\frac{\binom{k}{m}}{2^{k-1}}$. If $k$ even, then $P(M=k/2)=\frac{\binom{k}{k/2}}{2^k}$.</p> <p>So, for $k=2j+1$ odd, you have:</p> <p>$$\begin{align}E(M)&amp;=\frac{1}{2^{2j}}\sum_{m=j+1}^{2j+1} m\binom{2j+1}{m}\\ &amp;=\frac{2j+1}{2^{2j}}\sum_{m=j+1}^{2j+1}\binom{2j}{m-1}\\ &amp;=\frac{2j+1}{2^{2j}}\sum_{m=j}^{2j}\binom{2j}{m}\\ &amp;=\frac{2j+1}{2^{2j}}\left(\frac{1}{2}2^{2j}+\frac{1}{2}\binom{2j}{j}\right)\\ &amp;=\frac{k}{2}\left(1+\frac{\binom{k-1}{(k-1)/2}}{2^{k-1}}\right) \end{align}$$</p> <p>For $N=2$ and $k=2j$, you get: $$ \begin{align}E(M)&amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{1}{2^{2j-1}}\sum_{m=j+1}^{2j}m\binom{2j}{m}\\ &amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{2j}{2^{2j-1}}\sum_{m=j+1}^{2j}\binom{2j-1}{m-1}\\ &amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{j}{2^{2j-2}}\cdot\frac{2^{2j-1}}{2}\\ &amp;=\frac{k}{2}\left(1+\frac{\binom{k}{k/2}}{2^k}\right) \end{align}$$</p> <p>It's gonna get messier when trying it for $N=3$.</p> Answer 10: <p>Here is a closed form, we will need more sophisticated methods for the asymptotics.<P></p> <p>Following the notation introduced at this <a href="https://math.stackexchange.com/questions/2112055/">MSE link</a> we suppose that the die has $m$ faces and is rolled $n$ times. Rolling the die with the most occured value being $q$ and instances of this size being marked yields the species</p> <p>$$\mathfrak{S}_{=m} (\mathfrak{P}_{=0}(\mathcal{Z}) + \mathfrak{P}_{=1}(\mathcal{Z}) + \cdots + \mathcal{V}\mathfrak{P}_{=q}(\mathcal{Z})).$$</p> <p>This has generating function</p> <p>$$G(z,v) = \left(\sum_{r=0}^{q-1} \frac{z^r}{r!} + v\frac{z^q}{q!}\right)^m.$$</p> <p>Subtracting the values where sets of size $q$ did not occur we obtain the generating function</p> <p>$$H_{q}(z) = \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m - \left(\sum_{r=0}^{q-1} \frac{z^r}{r!}\right)^m.$$</p> <p>This also follows more or less by inspection.<P></p> <p>We then obtain for the desired quantity the closed form</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \frac{n!}{m^n} [z^n] \sum_{q=1}^n q H_q(z).}$$</p> <p>Introducing </p> <p>$$L_{q}(z) = \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m$$</p> <p>we thus have</p> <p>$$\frac{n!}{m^n} [z^n] \sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p> <p>This is</p> <p>$$\frac{n!}{m^n} [z^n] \left(n L_n(z) - \sum_{q=0}^{n-1} L_q(z)\right).$$</p> <p>We also have for </p> <p>$$[z^n] L_q(z) = \sum_{k=0}^{\min(q, n)} \frac{1}{k!} [z^{n-k}] \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^{m-1}$$</p> <p>Furthermore we obtain for $m=1$ </p> <p>$$[z^n] L_q(z) = [[n \le q]] \times \frac{1}{n!}.$$</p> <p>With these we can implement a recursion, which in fact on being coded proved inferior to Maple's fast polynomial multiplication routines. 
It is included here because it memoizes coefficients of $L_q(z)$, thereby providing a dramatic speed-up of the plots at the cost of allocating more memory.<P></p> <p>All of this yields the following graph where we have scaled the plot by a factor of $n/m.$ This is it for a six-sided die:</p> <pre> 3+ H + | H + H | + H + HH | HH 2+ HH | HHH + HHHH + HHHHHHH | HHHHHHHHHHH + HHHHHHHHHHHHHHHHHHHHHHHHH | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- 0 20 40 60 80 100 120 </pre> <p>And here is the plot for a twelve-sided die. (I consider it worth observing that we have the <strong>exact</strong> value for the expectation in the case of $120$ rolls of this die, a case count that has $130$ digits, similar to what appeared in the companion post.)</p> <pre> 8+ | + + | + + H | 6+ | + + | H + + H | 4+ HH + H | H + HH + HH | HHHH + HHHHHH 2+ HHHHHHHHHHH | HHHHHHHHHHHHHHHHHHHHHHHHH + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- 0 20 40 60 80 100 120 </pre> <p>This was the Maple code.</p> <pre> with(plots); with(combinat); ENUM := proc(n, m) option remember; local rolls, res, ind, counts, least, most; res := 0; for ind from m^n to 2*m^n-1 do rolls := convert(ind, base, m); counts := map(mel-&gt;op(2, mel), convert(rolls[1..n], &#96;multiset&#96;)); res := res + max(counts); od; res/m^n; end; L := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m; X := proc(n, m) option remember; local H; H := q -&gt; expand(L(m,q)-L(m,q-1)); n!/m^n* coeff(add(q*H(q), q=1..n), z, n); end; LCF := proc(n,m,q) option remember; if n &lt; 0 then return 0 fi; if m = 1 then if n &lt;= q then return 1/n! fi; return 0; fi; add(1/k!*LCF(n-k,m-1,q),k=0..min(q,n)); end; LVERIF := (m, q) -&gt; add(LCF(n, m, q)*z^n, n=0..q*m); XX := proc(n, m) option remember; local res; res := n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1); res*n!/m^n; end; DICEPLOT := proc(nmx, m) local pts; pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)]; pointplot(pts); end; </pre> Answer 11: <p>Not an answer, but it's worth trying some example with small $N$.</p> <p>Let $M$ be your number.</p> <p>For $n=2$, and any $k/2 &lt; m \leq k$ you have $P(M=m)=\frac{\binom{k}{m}}{2^{k-1}}$. If $k$ even, then $P(M=k/2)=\frac{\binom{k}{k/2}}{2^k}$.</p> <p>So, for $k=2j+1$ odd, you have:</p> <p>$$\begin{align}E(M)&amp;=\frac{1}{2^{2j}}\sum_{m=j+1}^{2j+1} m\binom{2j+1}{m}\\ &amp;=\frac{2j+1}{2^{2j}}\sum_{m=j+1}^{2j+1}\binom{2j}{m-1}\\ &amp;=\frac{2j+1}{2^{2j}}\sum_{m=j}^{2j}\binom{2j}{m}\\ &amp;=\frac{2j+1}{2^{2j}}\left(\frac{1}{2}2^{2j}+\frac{1}{2}\binom{2j}{j}\right)\\ &amp;=\frac{k}{2}\left(1+\frac{\binom{k-1}{(k-1)/2}}{2^{k-1}}\right) \end{align}$$</p> <p>For $N=2$ and $k=2j$, you get: $$ \begin{align}E(M)&amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{1}{2^{2j-1}}\sum_{m=j+1}^{2j}m\binom{2j}{m}\\ &amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{2j}{2^{2j-1}}\sum_{m=j+1}^{2j}\binom{2j-1}{m-1}\\ &amp;=j\frac{\binom{2j}{j}}{2^{2j}}+\frac{j}{2^{2j-2}}\cdot\frac{2^{2j-1}}{2}\\ &amp;=\frac{k}{2}\left(1+\frac{\binom{k}{k/2}}{2^k}\right) \end{align}$$</p> <p>It's gonna get messier when trying it for $N=3$.</p> Answer 12: <p>Here is a closed form, we will need more sophisticated methods for the asymptotics.<P></p> <p>Following the notation introduced at this <a href="https://math.stackexchange.com/questions/2112055/">MSE link</a> we suppose that the die has $m$ faces and is rolled $n$ times. 
Rolling the die with the most occured value being $q$ and instances of this size being marked yields the species</p> <p>$$\mathfrak{S}_{=m} (\mathfrak{P}_{=0}(\mathcal{Z}) + \mathfrak{P}_{=1}(\mathcal{Z}) + \cdots + \mathcal{V}\mathfrak{P}_{=q}(\mathcal{Z})).$$</p> <p>This has generating function</p> <p>$$G(z,v) = \left(\sum_{r=0}^{q-1} \frac{z^r}{r!} + v\frac{z^q}{q!}\right)^m.$$</p> <p>Subtracting the values where sets of size $q$ did not occur we obtain the generating function</p> <p>$$H_{q}(z) = \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m - \left(\sum_{r=0}^{q-1} \frac{z^r}{r!}\right)^m.$$</p> <p>This also follows more or less by inspection.<P></p> <p>We then obtain for the desired quantity the closed form</p> <p>$$\bbox[5px,border:2px solid #00A000]{ \frac{n!}{m^n} [z^n] \sum_{q=1}^n q H_q(z).}$$</p> <p>Introducing </p> <p>$$L_{q}(z) = \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m$$</p> <p>we thus have</p> <p>$$\frac{n!}{m^n} [z^n] \sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$</p> <p>This is</p> <p>$$\frac{n!}{m^n} [z^n] \left(n L_n(z) - \sum_{q=0}^{n-1} L_q(z)\right).$$</p> <p>We also have for </p> <p>$$[z^n] L_q(z) = \sum_{k=0}^{\min(q, n)} \frac{1}{k!} [z^{n-k}] \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^{m-1}$$</p> <p>Furthermore we obtain for $m=1$ </p> <p>$$[z^n] L_q(z) = [[n \le q]] \times \frac{1}{n!}.$$</p> <p>With these we can implement a recursion, which in fact on being coded proved inferior to Maple's fast polynomial multiplication routines. It is included here because it memoizes coefficients of $L_q(z)$, thereby providing a dramatic speed-up of the plots at the cost of allocating more memory.<P></p> <p>All of this yields the following graph where we have scaled the plot by a factor of $n/m.$ This is it for a six-sided die:</p> <pre> 3+ H + | H + H | + H + HH | HH 2+ HH | HHH + HHHH + HHHHHHH | HHHHHHHHHHH + HHHHHHHHHHHHHHHHHHHHHHHHH | HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- 0 20 40 60 80 100 120 </pre> <p>And here is the plot for a twelve-sided die. (I consider it worth observing that we have the <strong>exact</strong> value for the expectation in the case of $120$ rolls of this die, a case count that has $130$ digits, similar to what appeared in the companion post.)</p> <pre> 8+ | + + | + + H | 6+ | + + | H + + H | 4+ HH + H | H + HH + HH | HHHH + HHHHHH 2+ HHHHHHHHHHH | HHHHHHHHHHHHHHHHHHHHHHHHH + HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH -+--+---+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- 0 20 40 60 80 100 120 </pre> <p>This was the Maple code.</p> <pre> with(plots); with(combinat); ENUM := proc(n, m) option remember; local rolls, res, ind, counts, least, most; res := 0; for ind from m^n to 2*m^n-1 do rolls := convert(ind, base, m); counts := map(mel-&gt;op(2, mel), convert(rolls[1..n], &#96;multiset&#96;)); res := res + max(counts); od; res/m^n; end; L := (m, rmax) -&gt; add(z^r/r!, r=0..rmax)^m; X := proc(n, m) option remember; local H; H := q -&gt; expand(L(m,q)-L(m,q-1)); n!/m^n* coeff(add(q*H(q), q=1..n), z, n); end; LCF := proc(n,m,q) option remember; if n &lt; 0 then return 0 fi; if m = 1 then if n &lt;= q then return 1/n! 
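<p>Finally, a full enumeration in the spirit of the brute-force Maple procedure <code>ENUM</code> above (an illustrative, self-contained sketch) confirms the odd-$k$ closed form once more for small parameters:</p>
<pre>
from fractions import Fraction
from collections import Counter
from itertools import product
from math import comb

def expected_max_enum(n, m):
    # Average of the largest face multiplicity over all m^n equally
    # likely roll sequences, mirroring the Maple procedure ENUM.
    total = sum(max(Counter(rolls).values())
                for rolls in product(range(m), repeat=n))
    return Fraction(total, m ** n)

# For m = 2 and odd k this must match the closed form derived above.
k = 7
assert expected_max_enum(k, 2) == \
    Fraction(k, 2) * (1 + Fraction(comb(k - 1, (k - 1) // 2), 2 ** (k - 1)))

print(expected_max_enum(4, 6))   # exact value for four rolls of a die
</pre>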
Is Lebesgue outer measure continuous from above for sets with finite Lebesgue outer measure
2,115,139
7
https://math.stackexchange.com/questions/2115139/is-lebesgue-outer-measure-continuous-from-above-for-sets-with-finite-lebesgue-ou
<p>Does there exist a decreasing sequence $\{A_n\}$ of sets of real numbers (that is, $A_{n+1}\subseteq A_n$ for all $n$) such that</p> <p>$m^*(A_n)&lt;\infty$ for all $n \in \mathbb N$, and $m^*(\cap_{n\in \mathbb N} A_n) &lt; \lim_{n\to \infty} m^*(A_n)$? Here $m^*$ is the Lebesgue outer measure on the real line.</p>
[ { "answer_id": 2115165, "score": 8, "text": "<p>Let $V$ be the <a href=\"https://en.wikipedia.org/wiki/Vitali_set#Measurable_sets\" rel=\"noreferrer\"><em>Vitali set</em></a> which is formed by choosing a representative within $[0, 1]$ for each coset of $\\Bbb{R}/\\Bbb{Q}$. Also, let us enumerate $\\Bbb{Q} \\cap [-1, 1] = \\{q_1, q_2, \\cdots \\}$ and define $\\{V_n\\}$ by</p>\n\n<p>$$ V_n = q_n + V = \\{ v + q_n : v \\in V\\}. $$</p>\n\n<p>Here are some observations:</p>\n\n<ul>\n<li><p>Since the Lebesgue outer measure is translation-invariant, the sets $\\{V_n\\}$ have the common outer measure $m^*(V_n) = m^*(V)$.</p></li>\n<li><p>For each $x \\in [0, 1]$ there is a unique $v \\in V$ such that $x - v \\in \\Bbb{Q}$. Since $x - v \\in [-1, 1]$, this tells us that $x - v = q_n$ for a unique $n$, and hence $x \\in V_n$. The same uniqueness shows that the $V_n$ are pairwise disjoint.</p></li>\n<li><p>From the previous observation, we have</p>\n\n<p>$$ [0, 1] \\subseteq \\cup_{n=1}^{\\infty} V_n \\subseteq [-1, 2]. $$</p>\n\n<p>If $V$ were measurable, countable additivity over the disjoint sets $V_n$ would give $1 \\leq \\sum_{n=1}^{\\infty} m(V) \\leq 3$, which is impossible: the sum is $0$ if $m(V) = 0$ and $\\infty$ otherwise. Therefore $V$ is non-measurable. Also notice that this implies $m^*(V) &gt; 0$, for otherwise $V$ would be a null set and hence measurable.</p></li>\n</ul>\n\n<p>Finally, define $\\{ A_n \\}$ by</p>\n\n<p>$$ A_n = \\cup_{k=n}^{\\infty} V_k. $$</p>\n\n<p>Then $\\{ A_n \\}$ decreases to $\\varnothing$ (each $x$ lies in at most one $V_k$, so $x \\notin A_n$ as soon as $n &gt; k$), and $V_n \\subset A_n$ implies that $m^*(A_n) \\geq m^*(V_n) = m^*(V)$. Therefore</p>\n\n<p>$$ m^*(\\varnothing) = 0 &lt; m^*(V) \\leq \\lim_{n\\to\\infty} m^*(A_n). $$</p>\n" }, { "answer_id": 2116168, "score": 1, "text": "<p>An alternative approach:</p>\n\n<p>Let $B$ be a basis of $\\mathbb{R}$ over $\\mathbb{Q}$ such that $1\\in B$. Define $G=\\mathrm{span}_{\\mathbb{Q}}(B \\backslash \\{1\\})$. Then $\\{G+q | q\\in\\mathbb{Q}\\}$ is a countable family of cosets of $G$ partitioning $\\mathbb{R}$. Let\n$$\nA_n=[0,1]\\cap\\left( \\cup \\{ G+q | q\\in\\mathbb{Q}, \\ q&gt;n \\}\\right).\n$$\nThen\n$$\n\\cap_{n=1}^{\\infty} A_n = \\emptyset \\ \\ \\mathrm{and} \\ \\ m^*(A_n)=1 \\ \\ \\mathrm{for\\ all} \\ n\\in\\mathbb{N}.$$\nThe intersection is empty because each $x$ lies in exactly one coset $G+q_x$, and $q_x &gt; n$ fails once $n \\geq q_x$. The statement about $m^*(A_n)$ follows from the answer to your other question: <a href=\"https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval\">$G$ be a non-measurable subgroup of $(\\mathbb R,+)$; $I$ be a bounded interval, then $m^*(G \\cap I)=m^*(I)$?</a></p>\n\n<p>In fact, $m^*([0,1]\\cap (G+q)) = m^*([0,1]\\cap G) = 1$ for all $q\\in\\mathbb{Q}$.</p>\n" } ]
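<p>A small addendum, spelling out a step the second answer leaves implicit, namely why the cosets of $G$ are exactly $\{G + q : q \in \mathbb{Q}\}$: writing $x \in \mathbb{R}$ in the basis $B$ as</p> <p>$$ x = c \cdot 1 + \sum_{b \in B \setminus \{1\}} c_b\, b, \qquad c,\ c_b \in \mathbb{Q} \text{ with only finitely many } c_b \neq 0, $$</p> <p>we get $x - c \in G$, hence $x \in G + c$; and if $G + q = G + q'$ with $q, q' \in \mathbb{Q}$, then $q - q' \in G \cap \mathbb{Q}\cdot 1 = \{0\}$ by uniqueness of basis coordinates. Thus $x \mapsto c$ identifies $\mathbb{R}/G$ with the countable group $\mathbb{Q}$.</p>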
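<p>A remark for contrast (an addendum, not part of either answer): with <em>measurable</em> sets no such example exists, which is why both constructions above rely on non-measurable sets. If the $A_n$ are measurable and decreasing with $m(A_1) &lt; \infty$, then the sets $A_1 \setminus A_n$ increase to $A_1 \setminus \cap_{k} A_k$, so continuity from below gives</p> <p>$$ m(A_1) - m\left(\bigcap_{k=1}^{\infty} A_k\right) = m\left(\bigcup_{n=1}^{\infty} \left(A_1 \setminus A_n\right)\right) = \lim_{n\to\infty} m(A_1 \setminus A_n) = m(A_1) - \lim_{n\to\infty} m(A_n), $$</p> <p>and cancelling the finite $m(A_1)$ yields $m\left(\cap_k A_k\right) = \lim_{n\to\infty} m(A_n)$.</p>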
Also, let us enumerate $\\Bbb{Q} \\cap [-1, 1] = \\{q_1, q_2, \\cdots \\}$ and define $\\{V_n\\}$ by</p>\n\n<p>$$ V_n = q_n + V = \\{ v + q_n : v \\in V\\}. $$</p>\n\n<p>Here are some observations:</p>\n\n<ul>\n<li><p>Since the Lebesgue outer measure is translation-invariant, $\\{V_n\\}$ has the common outer measure $m^*(V_n) = m^*(V)$.</p></li>\n<li><p>For each $x \\in [0, 1]$ there is a unique $v \\in V$ such that $x - v \\in \\Bbb{Q}$. Since $x - v \\in [-1, 1]$, this tells that $x - v = q_n$ for a unique $n$ and hence $x \\in V_n$.</p></li>\n<li><p>From the previous observation, we have</p>\n\n<p>$$ [0, 1] \\subseteq \\cup_{n=1}^{\\infty} V_n \\subseteq [-1, 2]. $$</p>\n\n<p>If $V$ were measurable, we would have had $1 \\leq \\infty \\cdot m(V) \\leq 3$, which is impossible. Therefore $V$ is non-measurable. Also notice that this implies $m^*(V) &gt; 0$, for otherwise $V$ is a null-set and hence measurable.</p></li>\n</ul>\n\n<p>Finally, define $\\{ A_n \\}$ by</p>\n\n<p>$$ A_n = \\cup_{k=n}^{\\infty} V_k. $$</p>\n\n<p>Then $\\{ A_n \\}$ decreases to $\\varnothing$, and $V_n \\subset A_n$ implies that $m^*(A_n) \\geq m^*(V_n) = m^*(V)$. Therefore</p>\n\n<p>$$ m^*(\\varnothing) = 0 &lt; m^*(V) \\leq \\lim_{n\\to\\infty} m^*(A_n). $$</p>\n" }, { "answer_id": 2116168, "score": 1, "text": "<p>An alternative approach:</p>\n\n<p>Let $B$ be a basis of $\\mathbb{R}$ over $\\mathbb{Q}$ such that $1\\in B$. Define $G=\\mathrm{span}_{\\mathbb{Q}}(B \\backslash \\{1\\})$. Then $\\{G+q | q\\in\\mathbb{Q}\\}$ forms countably many cosets of $G$ partitioning $\\mathbb{R}$. Let\n$$\nA_n=[0,1]\\cap\\left( \\cup \\{ G+q | q\\in\\mathbb{Q}, \\ q&gt;n \\}\\right).\n$$\nThen \n$$\n\\cap_{n=1}^{\\infty} A_n = \\emptyset \\ \\mathrm{and} \\ \\ m^*(A_n)=1 \\ \\ \\mathrm{for} \\ \\mathrm{all} \\ n\\in\\mathbb{N}.$$\nThe last statement follows from the answer to your other question: <a href=\"https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval\">$G$ be a non-measurable subgroup of $(\\mathbb R,+)$ ; $I$ be a bounded interval , then $m^*(G \\cap I)=m^*(I)$?</a></p>\n\n<p>In fact, $m^*([0,1]\\cap (G+q)) = m^*([0,1]\\cap G)= 1$ for all $q\\in\\mathbb{Q}$. </p>\n" } ]
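To make the counting step in the Vitali-set answer (2115165) above explicit, here is a worked restatement in its notation (added here, not part of the original thread; note the $V_n$ are pairwise disjoint because $V$ contains exactly one representative of each coset): if $V$ were measurable, then monotonicity, countable additivity and translation invariance would give
$$ 1 = m([0,1]) \le m\Big(\bigcup_{n=1}^{\infty} V_n\Big) = \sum_{n=1}^{\infty} m(V_n) = \sum_{n=1}^{\infty} m(V) \le m([-1,2]) = 3, $$
which fails both when $m(V)=0$ (the middle sum is $0$) and when $m(V)>0$ (the middle sum is infinite).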
Question: Is Lebesgue outer measure continuous from above for sets with finite Lebesgue outer measure Answer 1: <p>Let $V$ be the <a href="https://en.wikipedia.org/wiki/Vitali_set#Measurable_sets" rel="noreferrer"><em>Vitali set</em></a> which is formed by choosing a representative within $[0, 1]$ for each coset of $\Bbb{R}/\Bbb{Q}$. Also, let us enumerate $\Bbb{Q} \cap [-1, 1] = \{q_1, q_2, \cdots \}$ and define $\{V_n\}$ by</p> <p>$$ V_n = q_n + V = \{ v + q_n : v \in V\}. $$</p> <p>Here are some observations:</p> <ul> <li><p>Since the Lebesgue outer measure is translation-invariant, $\{V_n\}$ has the common outer measure $m^*(V_n) = m^*(V)$.</p></li> <li><p>For each $x \in [0, 1]$ there is a unique $v \in V$ such that $x - v \in \Bbb{Q}$. Since $x - v \in [-1, 1]$, this tells us that $x - v = q_n$ for a unique $n$, and hence $x \in V_n$.</p></li> <li><p>From the previous observation, we have</p> <p>$$ [0, 1] \subseteq \cup_{n=1}^{\infty} V_n \subseteq [-1, 2]. $$</p> <p>If $V$ were measurable, we would have $1 \leq \infty \cdot m(V) \leq 3$, which is impossible. Therefore $V$ is non-measurable. Also notice that this implies $m^*(V) &gt; 0$, for otherwise $V$ would be a null set and hence measurable.</p></li> </ul> <p>Finally, define $\{ A_n \}$ by</p> <p>$$ A_n = \cup_{k=n}^{\infty} V_k. $$</p> <p>Then $\{ A_n \}$ decreases to $\varnothing$, and $V_n \subset A_n$ implies that $m^*(A_n) \geq m^*(V_n) = m^*(V)$. Therefore</p> <p>$$ m^*(\varnothing) = 0 &lt; m^*(V) \leq \lim_{n\to\infty} m^*(A_n). $$</p> Answer 2: <p>An alternative approach:</p> <p>Let $B$ be a basis of $\mathbb{R}$ over $\mathbb{Q}$ such that $1\in B$. Define $G=\mathrm{span}_{\mathbb{Q}}(B \backslash \{1\})$. Then $\{G+q | q\in\mathbb{Q}\}$ forms countably many cosets of $G$ partitioning $\mathbb{R}$. Let $$ A_n=[0,1]\cap\left( \cup \{ G+q | q\in\mathbb{Q}, \ q&gt;n \}\right). $$ Then $$ \cap_{n=1}^{\infty} A_n = \emptyset \ \mathrm{and} \ \ m^*(A_n)=1 \ \ \mathrm{for} \ \mathrm{all} \ n\in\mathbb{N}.$$ The last statement follows from the answer to your other question: <a href="https://math.stackexchange.com/questions/2101953/g-be-a-non-measurable-subgroup-of-mathbb-r-i-be-a-bounded-interval">$G$ be a non-measurable subgroup of $(\mathbb R,+)$ ; $I$ be a bounded interval , then $m^*(G \cap I)=m^*(I)$?</a></p> <p>In fact, $m^*([0,1]\cap (G+q)) = m^*([0,1]\cap G)= 1$ for all $q\in\mathbb{Q}$. </p>
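The last step of Answer 2 can be spelled out as follows (a sketch added here, assuming the result cited from the linked question, namely $m^*(G \cap I)=m^*(I)$ for every bounded interval $I$): since $m^*$ is translation-invariant,
$$ m^*\big([0,1]\cap (G+q)\big) = m^*\big([-q,\,1-q]\cap G\big) = m^*([-q,\,1-q]) = 1, $$
and for every $n$ there is a rational $q>n$ with $[0,1]\cap(G+q) \subseteq A_n$, so monotonicity gives $m^*(A_n)\ge 1$; the reverse bound $m^*(A_n)\le m^*([0,1])=1$ is immediate from $A_n \subseteq [0,1]$.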
How to evaluate $\int_0^\pi \cos(x) \cos(2x) \cos(3x) \cos(4x)\, dx$
2,116,721
7
https://math.stackexchange.com/questions/2116721/how-to-evaluate-int-0-pi-cosx-cos2x-cos3x-cos4x-dx
<p>Is there an easy way to evaluate the integral $\int_0^\pi \cos(x) \cos(2x) \cos(3x) \cos(4x)\, dx$?</p> <p>I know that I can rewrite each cosine via the exponential function and use the linearity of the integral. However, this would lead to 16 summands, which I really don't want to calculate separately.</p>
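The value derived in the answers below can be sanity-checked by machine. The following is an illustrative sketch (added here, not part of the original thread) using standard SymPy calls; the commented outputs are the values the answers derive, not captured program output.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x) * sp.cos(2*x) * sp.cos(3*x) * sp.cos(4*x)

# Exact value of the integral over [0, pi]: SymPy rewrites the product
# of cosines as a sum of cosines and integrates term by term.
print(sp.integrate(f, (x, 0, sp.pi)))  # expected: pi/8

# Product-to-sum expansion used in the hint answer below:
#   cos x cos 2x cos 3x cos 4x
#     = (1/8) * (1 + cos 10x + cos 8x + cos 6x + 2 cos 4x + 2 cos 2x)
target = sp.Rational(1, 8) * (1 + sp.cos(10*x) + sp.cos(8*x)
                              + sp.cos(6*x) + 2*sp.cos(4*x) + 2*sp.cos(2*x))
print(sp.simplify(f - target))  # expected: 0
```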
[ { "answer_id": 2116801, "score": 14, "text": "<p>Because I like it, I will add a tricky approach (<span class=\"math-container\">$C$</span> denotes the unit circle):</p>\n\n<p><span class=\"math-container\">$$\nI=\\frac{1}{2}\\int_{-\\pi}^{\\pi}dx\\prod_{n=1}^4\\cos(nx)\\underbrace{=}_{z=e^{ix}}\\frac{1}{32i}\\oint_C\\frac{1}{z^{11}}\\prod_{n=1}^4(z^{2n}+1).\n$$</span></p>\n\n<p>Now, since <span class=\"math-container\">$\\oint_Cz^{n}=0$</span> for <span class=\"math-container\">$n\\in \\mathbb{Z}$</span> and <span class=\"math-container\">$n\\neq-1$</span>, only the terms of the product with total power of <span class=\"math-container\">$10$</span> will contribute. There are exactly two of them <span class=\"math-container\">$2+8=4+6=10$</span>, so</p>\n\n<blockquote>\n <p><span class=\"math-container\">$$\nI=\\frac{1}{32i}\\oint_C\\frac{2}{z}=\\frac{\\pi}8\n$$</span></p>\n</blockquote>\n\n<p>where the last equality results from the residue theorem.</p>\n\n<hr>\n\n<p>Fiddling around with generalizations of this result and consulting OEIS I stumbled over this interesting set of slides: <a href=\"http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf\" rel=\"nofollow noreferrer\">http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf</a>\nSo integrals of this kind have a deep connection to problems in number theory which is pretty awesome</p>\n" }, { "answer_id": 2116727, "score": 12, "text": "<p><strong>HINT:</strong> We have the following identities </p>\n\n<p>$\\cos(A+ B) = \\cos A \\cos B - \\sin A \\sin B$ and </p>\n\n<p>$\\cos(A-B) = \\cos A \\cos B + \\sin A \\sin B$</p>\n\n<p>$2\\cos A \\cos B = \\cos(A+B) + \\cos (A-B)$</p>\n\n<p>$\\cos A \\cos B = \\dfrac{\\cos(A+B) + \\cos(A-B)}{2}$</p>\n\n<p>Take $\\cos x$ and $\\cos 4x$ together and $\\cos 2x$ and $\\cos 3x$ together.</p>\n\n<p>Then $\\cos(x) \\cos(2x) \\cos(3x) \\cos(4x) =\\\\ \\frac18[1 + \\cos(10x) + \\cos(8x)+ \\cos(6x)+2\\cos(4x)+2\\cos(2x)+\\cos(x) ]$.</p>\n\n<p>Now you can do with your usual integration formula.</p>\n" }, { "answer_id": 2117585, "score": 2, "text": "<p>Using <a href=\"http://mathworld.wolfram.com/WernerFormulas.html\" rel=\"nofollow noreferrer\">Werner's</a> formula with <span class=\"math-container\">$a\\ge b&gt;0$</span> so that <span class=\"math-container\">$a\\pm b$</span> are integers</p>\n<p><span class=\"math-container\">$$\\int_0^\\pi\\cos ax\\cos bx\\ dx=\\dfrac12\\int_0^\\pi\\{\\cos(a+b)x+\\cos(a-b)x\\} dx=\\cdots=\\begin{cases}0&amp;\\mbox{if } a\\ne b\\\\ \n\\dfrac\\pi2 &amp; \\mbox{if } a=b\\end{cases}$$</span></p>\n<p>Now, <span class=\"math-container\">$(2\\cos x\\cos4x)(2\\cos2x\\cos3x)=(\\cos3x+\\cos5x)(\\cos x+\\cos5x)$</span></p>\n<p><span class=\"math-container\">$=\\cos3x\\cos x+\\cos x\\cos5x+\\cos3x\\cos5x+\\cos5x\\cdot\\cos5x$</span></p>\n<p>So, <span class=\"math-container\">$\\displaystyle4\\int_0^\\pi\\cos x\\cos2x\\cos3x\\cos4x\\ dx=\\dfrac\\pi2$</span> for <span class=\"math-container\">$a=b=5$</span></p>\n" }, { "answer_id": 2116758, "score": 1, "text": "<p>There is a <a href=\"https://qualifications.pearson.com/content/dam/pdf/A%20Level/Mathematics/2013/Specification%20and%20sample%20assessments/N38210A-GCE-Mathematical-Formulae-Statistical-Tables.pdf\" rel=\"nofollow noreferrer\">well-known</a> identity which says</p>\n\n<p>$$\\cos A + \\cos B = 2\\cos\\left(\\frac{A-B}{2}\\right)\\cos\\left(\\frac{A+B}{2}\\right)$$</p>\n\n<p>If we put $\\frac{A-B}{2} = x$ and $\\frac{A+B}{2}=2x$ then we get $A=3x$ and $B=x$, so $$ \\cos x \\cos 2x \\equiv 
\\frac{1}{2}(\\cos x+\\cos 3x) $$</p>\n\n<p>We can repeat this for $\\cos 3x$ and $\\cos 4x$. Solving $\\frac{A-B}{2} = 3x$ and $\\frac{A+B}{2}=4x$ gives $$\\cos 3x \\cos 4x \\equiv \\frac{1}{2}(\\cos x + \\cos 7x)$$\nPutting this together gives\n$$\\cos x \\cos 2x \\cos 3x \\cos 4x \\equiv \\frac{1}{4}(\\cos x+\\cos 3x)(\\cos x+\\cos 7x)$$</p>\n\n<p>Now, you need to expand these brackets and follow the same procedure to simplify $\\cos x \\cos x$, $\\cos x \\cos 7x$, $\\cos 3x \\cos x$ and $\\cos 3x \\cos 7x$.</p>\n" }, { "answer_id": 2116801, "score": 14, "text": "<p>Because I like it, I will add a tricky approach (<span class=\"math-container\">$C$</span> denotes the unit circle):</p>\n\n<p><span class=\"math-container\">$$\nI=\\frac{1}{2}\\int_{-\\pi}^{\\pi}dx\\prod_{n=1}^4\\cos(nx)\\underbrace{=}_{z=e^{ix}}\\frac{1}{32i}\\oint_C\\frac{1}{z^{11}}\\prod_{n=1}^4(z^{2n}+1).\n$$</span></p>\n\n<p>Now, since <span class=\"math-container\">$\\oint_Cz^{n}=0$</span> for <span class=\"math-container\">$n\\in \\mathbb{Z}$</span> and <span class=\"math-container\">$n\\neq-1$</span>, only the terms of the product with total power of <span class=\"math-container\">$10$</span> will contribute. There are exactly two of them <span class=\"math-container\">$2+8=4+6=10$</span>, so</p>\n\n<blockquote>\n <p><span class=\"math-container\">$$\nI=\\frac{1}{32i}\\oint_C\\frac{2}{z}=\\frac{\\pi}8\n$$</span></p>\n</blockquote>\n\n<p>where the last equality results from the residue theorem.</p>\n\n<hr>\n\n<p>Fiddling around with generalizations of this result and consulting OEIS I stumbled over this interesting set of slides: <a href=\"http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf\" rel=\"nofollow noreferrer\">http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf</a>\nSo integrals of this kind have a deep connection to problems in number theory which is pretty awesome</p>\n" }, { "answer_id": 2116727, "score": 12, "text": "<p><strong>HINT:</strong> We have the following identities </p>\n\n<p>$\\cos(A+ B) = \\cos A \\cos B - \\sin A \\sin B$ and </p>\n\n<p>$\\cos(A-B) = \\cos A \\cos B + \\sin A \\sin B$</p>\n\n<p>$2\\cos A \\cos B = \\cos(A+B) + \\cos (A-B)$</p>\n\n<p>$\\cos A \\cos B = \\dfrac{\\cos(A+B) + \\cos(A-B)}{2}$</p>\n\n<p>Take $\\cos x$ and $\\cos 4x$ together and $\\cos 2x$ and $\\cos 3x$ together.</p>\n\n<p>Then $\\cos(x) \\cos(2x) \\cos(3x) \\cos(4x) =\\\\ \\frac18[1 + \\cos(10x) + \\cos(8x)+ \\cos(6x)+2\\cos(4x)+2\\cos(2x)+\\cos(x) ]$.</p>\n\n<p>Now you can do with your usual integration formula.</p>\n" }, { "answer_id": 2117585, "score": 2, "text": "<p>Using <a href=\"http://mathworld.wolfram.com/WernerFormulas.html\" rel=\"nofollow noreferrer\">Werner's</a> formula with <span class=\"math-container\">$a\\ge b&gt;0$</span> so that <span class=\"math-container\">$a\\pm b$</span> are integers</p>\n<p><span class=\"math-container\">$$\\int_0^\\pi\\cos ax\\cos bx\\ dx=\\dfrac12\\int_0^\\pi\\{\\cos(a+b)x+\\cos(a-b)x\\} dx=\\cdots=\\begin{cases}0&amp;\\mbox{if } a\\ne b\\\\ \n\\dfrac\\pi2 &amp; \\mbox{if } a=b\\end{cases}$$</span></p>\n<p>Now, <span class=\"math-container\">$(2\\cos x\\cos4x)(2\\cos2x\\cos3x)=(\\cos3x+\\cos5x)(\\cos x+\\cos5x)$</span></p>\n<p><span class=\"math-container\">$=\\cos3x\\cos x+\\cos x\\cos5x+\\cos3x\\cos5x+\\cos5x\\cdot\\cos5x$</span></p>\n<p>So, <span class=\"math-container\">$\\displaystyle4\\int_0^\\pi\\cos x\\cos2x\\cos3x\\cos4x\\ dx=\\dfrac\\pi2$</span> for <span class=\"math-container\">$a=b=5$</span></p>\n" }, { "answer_id": 2116758, 
"score": 1, "text": "<p>There is a <a href=\"https://qualifications.pearson.com/content/dam/pdf/A%20Level/Mathematics/2013/Specification%20and%20sample%20assessments/N38210A-GCE-Mathematical-Formulae-Statistical-Tables.pdf\" rel=\"nofollow noreferrer\">well-known</a> identity which says</p>\n\n<p>$$\\cos A + \\cos B = 2\\cos\\left(\\frac{A-B}{2}\\right)\\cos\\left(\\frac{A+B}{2}\\right)$$</p>\n\n<p>If we put $\\frac{A-B}{2} = x$ and $\\frac{A+B}{2}=2x$ then we get $A=3x$ and $B=x$, so $$ \\cos x \\cos 2x \\equiv \\frac{1}{2}(\\cos x+\\cos 3x) $$</p>\n\n<p>We can repeat this for $\\cos 3x$ and $\\cos 4x$. Solving $\\frac{A-B}{2} = 3x$ and $\\frac{A+B}{2}=4x$ gives $$\\cos 3x \\cos 4x \\equiv \\frac{1}{2}(\\cos x + \\cos 7x)$$\nPutting this together gives\n$$\\cos x \\cos 2x \\cos 3x \\cos 4x \\equiv \\frac{1}{4}(\\cos x+\\cos 3x)(\\cos x+\\cos 7x)$$</p>\n\n<p>Now, you need to expand these brackets and follow the same procedure to simplify $\\cos x \\cos x$, $\\cos x \\cos 7x$, $\\cos 3x \\cos x$ and $\\cos 3x \\cos 7x$.</p>\n" }, { "answer_id": 2116801, "score": 14, "text": "<p>Because I like it, I will add a tricky approach (<span class=\"math-container\">$C$</span> denotes the unit circle):</p>\n\n<p><span class=\"math-container\">$$\nI=\\frac{1}{2}\\int_{-\\pi}^{\\pi}dx\\prod_{n=1}^4\\cos(nx)\\underbrace{=}_{z=e^{ix}}\\frac{1}{32i}\\oint_C\\frac{1}{z^{11}}\\prod_{n=1}^4(z^{2n}+1).\n$$</span></p>\n\n<p>Now, since <span class=\"math-container\">$\\oint_Cz^{n}=0$</span> for <span class=\"math-container\">$n\\in \\mathbb{Z}$</span> and <span class=\"math-container\">$n\\neq-1$</span>, only the terms of the product with total power of <span class=\"math-container\">$10$</span> will contribute. There are exactly two of them <span class=\"math-container\">$2+8=4+6=10$</span>, so</p>\n\n<blockquote>\n <p><span class=\"math-container\">$$\nI=\\frac{1}{32i}\\oint_C\\frac{2}{z}=\\frac{\\pi}8\n$$</span></p>\n</blockquote>\n\n<p>where the last equality results from the residue theorem.</p>\n\n<hr>\n\n<p>Fiddling around with generalizations of this result and consulting OEIS I stumbled over this interesting set of slides: <a href=\"http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf\" rel=\"nofollow noreferrer\">http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf</a>\nSo integrals of this kind have a deep connection to problems in number theory which is pretty awesome</p>\n" }, { "answer_id": 2116727, "score": 12, "text": "<p><strong>HINT:</strong> We have the following identities </p>\n\n<p>$\\cos(A+ B) = \\cos A \\cos B - \\sin A \\sin B$ and </p>\n\n<p>$\\cos(A-B) = \\cos A \\cos B + \\sin A \\sin B$</p>\n\n<p>$2\\cos A \\cos B = \\cos(A+B) + \\cos (A-B)$</p>\n\n<p>$\\cos A \\cos B = \\dfrac{\\cos(A+B) + \\cos(A-B)}{2}$</p>\n\n<p>Take $\\cos x$ and $\\cos 4x$ together and $\\cos 2x$ and $\\cos 3x$ together.</p>\n\n<p>Then $\\cos(x) \\cos(2x) \\cos(3x) \\cos(4x) =\\\\ \\frac18[1 + \\cos(10x) + \\cos(8x)+ \\cos(6x)+2\\cos(4x)+2\\cos(2x)+\\cos(x) ]$.</p>\n\n<p>Now you can do with your usual integration formula.</p>\n" }, { "answer_id": 2117585, "score": 2, "text": "<p>Using <a href=\"http://mathworld.wolfram.com/WernerFormulas.html\" rel=\"nofollow noreferrer\">Werner's</a> formula with <span class=\"math-container\">$a\\ge b&gt;0$</span> so that <span class=\"math-container\">$a\\pm b$</span> are integers</p>\n<p><span class=\"math-container\">$$\\int_0^\\pi\\cos ax\\cos bx\\ dx=\\dfrac12\\int_0^\\pi\\{\\cos(a+b)x+\\cos(a-b)x\\} dx=\\cdots=\\begin{cases}0&amp;\\mbox{if } a\\ne 
b\\\\ \n\\dfrac\\pi2 &amp; \\mbox{if } a=b\\end{cases}$$</span></p>\n<p>Now, <span class=\"math-container\">$(2\\cos x\\cos4x)(2\\cos2x\\cos3x)=(\\cos3x+\\cos5x)(\\cos x+\\cos5x)$</span></p>\n<p><span class=\"math-container\">$=\\cos3x\\cos x+\\cos x\\cos5x+\\cos3x\\cos5x+\\cos5x\\cdot\\cos5x$</span></p>\n<p>So, <span class=\"math-container\">$\\displaystyle4\\int_0^\\pi\\cos x\\cos2x\\cos3x\\cos4x\\ dx=\\dfrac\\pi2$</span> for <span class=\"math-container\">$a=b=5$</span></p>\n" }, { "answer_id": 2116758, "score": 1, "text": "<p>There is a <a href=\"https://qualifications.pearson.com/content/dam/pdf/A%20Level/Mathematics/2013/Specification%20and%20sample%20assessments/N38210A-GCE-Mathematical-Formulae-Statistical-Tables.pdf\" rel=\"nofollow noreferrer\">well-known</a> identity which says</p>\n\n<p>$$\\cos A + \\cos B = 2\\cos\\left(\\frac{A-B}{2}\\right)\\cos\\left(\\frac{A+B}{2}\\right)$$</p>\n\n<p>If we put $\\frac{A-B}{2} = x$ and $\\frac{A+B}{2}=2x$ then we get $A=3x$ and $B=x$, so $$ \\cos x \\cos 2x \\equiv \\frac{1}{2}(\\cos x+\\cos 3x) $$</p>\n\n<p>We can repeat this for $\\cos 3x$ and $\\cos 4x$. Solving $\\frac{A-B}{2} = 3x$ and $\\frac{A+B}{2}=4x$ gives $$\\cos 3x \\cos 4x \\equiv \\frac{1}{2}(\\cos x + \\cos 7x)$$\nPutting this together gives\n$$\\cos x \\cos 2x \\cos 3x \\cos 4x \\equiv \\frac{1}{4}(\\cos x+\\cos 3x)(\\cos x+\\cos 7x)$$</p>\n\n<p>Now, you need to expand these brackets and follow the same procedure to simplify $\\cos x \\cos x$, $\\cos x \\cos 7x$, $\\cos 3x \\cos x$ and $\\cos 3x \\cos 7x$.</p>\n" }, { "answer_id": 2116801, "score": 14, "text": "<p>Because I like it, I will add a tricky approach (<span class=\"math-container\">$C$</span> denotes the unit circle):</p>\n\n<p><span class=\"math-container\">$$\nI=\\frac{1}{2}\\int_{-\\pi}^{\\pi}dx\\prod_{n=1}^4\\cos(nx)\\underbrace{=}_{z=e^{ix}}\\frac{1}{32i}\\oint_C\\frac{1}{z^{11}}\\prod_{n=1}^4(z^{2n}+1).\n$$</span></p>\n\n<p>Now, since <span class=\"math-container\">$\\oint_Cz^{n}=0$</span> for <span class=\"math-container\">$n\\in \\mathbb{Z}$</span> and <span class=\"math-container\">$n\\neq-1$</span>, only the terms of the product with total power of <span class=\"math-container\">$10$</span> will contribute. 
There are exactly two of them <span class=\"math-container\">$2+8=4+6=10$</span>, so</p>\n\n<blockquote>\n <p><span class=\"math-container\">$$\nI=\\frac{1}{32i}\\oint_C\\frac{2}{z}=\\frac{\\pi}8\n$$</span></p>\n</blockquote>\n\n<p>where the last equality results from the residue theorem.</p>\n\n<hr>\n\n<p>Fiddling around with generalizations of this result and consulting OEIS I stumbled over this interesting set of slides: <a href=\"http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf\" rel=\"nofollow noreferrer\">http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf</a>\nSo integrals of this kind have a deep connection to problems in number theory which is pretty awesome</p>\n" }, { "answer_id": 2116727, "score": 12, "text": "<p><strong>HINT:</strong> We have the following identities </p>\n\n<p>$\\cos(A+ B) = \\cos A \\cos B - \\sin A \\sin B$ and </p>\n\n<p>$\\cos(A-B) = \\cos A \\cos B + \\sin A \\sin B$</p>\n\n<p>$2\\cos A \\cos B = \\cos(A+B) + \\cos (A-B)$</p>\n\n<p>$\\cos A \\cos B = \\dfrac{\\cos(A+B) + \\cos(A-B)}{2}$</p>\n\n<p>Take $\\cos x$ and $\\cos 4x$ together and $\\cos 2x$ and $\\cos 3x$ together.</p>\n\n<p>Then $\\cos(x) \\cos(2x) \\cos(3x) \\cos(4x) =\\\\ \\frac18[1 + \\cos(10x) + \\cos(8x)+ \\cos(6x)+2\\cos(4x)+2\\cos(2x)+\\cos(x) ]$.</p>\n\n<p>Now you can do with your usual integration formula.</p>\n" }, { "answer_id": 2117585, "score": 2, "text": "<p>Using <a href=\"http://mathworld.wolfram.com/WernerFormulas.html\" rel=\"nofollow noreferrer\">Werner's</a> formula with <span class=\"math-container\">$a\\ge b&gt;0$</span> so that <span class=\"math-container\">$a\\pm b$</span> are integers</p>\n<p><span class=\"math-container\">$$\\int_0^\\pi\\cos ax\\cos bx\\ dx=\\dfrac12\\int_0^\\pi\\{\\cos(a+b)x+\\cos(a-b)x\\} dx=\\cdots=\\begin{cases}0&amp;\\mbox{if } a\\ne b\\\\ \n\\dfrac\\pi2 &amp; \\mbox{if } a=b\\end{cases}$$</span></p>\n<p>Now, <span class=\"math-container\">$(2\\cos x\\cos4x)(2\\cos2x\\cos3x)=(\\cos3x+\\cos5x)(\\cos x+\\cos5x)$</span></p>\n<p><span class=\"math-container\">$=\\cos3x\\cos x+\\cos x\\cos5x+\\cos3x\\cos5x+\\cos5x\\cdot\\cos5x$</span></p>\n<p>So, <span class=\"math-container\">$\\displaystyle4\\int_0^\\pi\\cos x\\cos2x\\cos3x\\cos4x\\ dx=\\dfrac\\pi2$</span> for <span class=\"math-container\">$a=b=5$</span></p>\n" }, { "answer_id": 2116758, "score": 1, "text": "<p>There is a <a href=\"https://qualifications.pearson.com/content/dam/pdf/A%20Level/Mathematics/2013/Specification%20and%20sample%20assessments/N38210A-GCE-Mathematical-Formulae-Statistical-Tables.pdf\" rel=\"nofollow noreferrer\">well-known</a> identity which says</p>\n\n<p>$$\\cos A + \\cos B = 2\\cos\\left(\\frac{A-B}{2}\\right)\\cos\\left(\\frac{A+B}{2}\\right)$$</p>\n\n<p>If we put $\\frac{A-B}{2} = x$ and $\\frac{A+B}{2}=2x$ then we get $A=3x$ and $B=x$, so $$ \\cos x \\cos 2x \\equiv \\frac{1}{2}(\\cos x+\\cos 3x) $$</p>\n\n<p>We can repeat this for $\\cos 3x$ and $\\cos 4x$. 
Solving $\\frac{A-B}{2} = 3x$ and $\\frac{A+B}{2}=4x$ gives $$\\cos 3x \\cos 4x \\equiv \\frac{1}{2}(\\cos x + \\cos 7x)$$\nPutting this together gives\n$$\\cos x \\cos 2x \\cos 3x \\cos 4x \\equiv \\frac{1}{4}(\\cos x+\\cos 3x)(\\cos x+\\cos 7x)$$</p>\n\n<p>Now, you need to expand these brackets and follow the same procedure to simplify $\\cos x \\cos x$, $\\cos x \\cos 7x$, $\\cos 3x \\cos x$ and $\\cos 3x \\cos 7x$.</p>\n" } ]
Question: How to evaluate $\int_0^\pi \cos(x) \cos(2x) \cos(3x) \cos(4x)\, dx$ Answer 1: <p>Because I like it, I will add a tricky approach (<span class="math-container">$C$</span> denotes the unit circle):</p> <p><span class="math-container">$$ I=\frac{1}{2}\int_{-\pi}^{\pi}dx\prod_{n=1}^4\cos(nx)\underbrace{=}_{z=e^{ix}}\frac{1}{32i}\oint_C\frac{1}{z^{11}}\prod_{n=1}^4(z^{2n}+1). $$</span></p> <p>Now, since <span class="math-container">$\oint_Cz^{n}\,dz=0$</span> for <span class="math-container">$n\in \mathbb{Z}$</span> and <span class="math-container">$n\neq-1$</span>, only the terms of the product with total power of <span class="math-container">$10$</span> will contribute. There are exactly two of them: <span class="math-container">$2+8=4+6=10$</span>, so</p> <blockquote> <p><span class="math-container">$$ I=\frac{1}{32i}\oint_C\frac{2}{z}\,dz=\frac{\pi}8 $$</span></p> </blockquote> <p>where the last equality results from the residue theorem.</p> <hr> <p>Fiddling around with generalizations of this result and consulting OEIS I stumbled over this interesting set of slides: <a href="http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf" rel="nofollow noreferrer">http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf</a> So integrals of this kind have a deep connection to problems in number theory, which is pretty awesome.</p> Answer 2: <p><strong>HINT:</strong> We have the following identities:</p> <p>$\cos(A+B) = \cos A \cos B - \sin A \sin B$ and</p> <p>$\cos(A-B) = \cos A \cos B + \sin A \sin B$</p> <p>$2\cos A \cos B = \cos(A+B) + \cos(A-B)$</p> <p>$\cos A \cos B = \dfrac{\cos(A+B) + \cos(A-B)}{2}$</p> <p>Take $\cos x$ and $\cos 4x$ together and $\cos 2x$ and $\cos 3x$ together.</p> <p>Then $\cos(x) \cos(2x) \cos(3x) \cos(4x) =\\ \frac18[1 + \cos(10x) + \cos(8x)+ \cos(6x)+2\cos(4x)+2\cos(2x)]$.</p> <p>Now you can integrate term by term with the usual formula.</p> Answer 3: <p>Using <a href="http://mathworld.wolfram.com/WernerFormulas.html" rel="nofollow noreferrer">Werner's</a> formula with <span class="math-container">$a\ge b&gt;0$</span> so that <span class="math-container">$a\pm b$</span> are integers</p> <p><span class="math-container">$$\int_0^\pi\cos ax\cos bx\ dx=\dfrac12\int_0^\pi\{\cos(a+b)x+\cos(a-b)x\} dx=\cdots=\begin{cases}0&amp;\mbox{if } a\ne b\\ \dfrac\pi2 &amp; \mbox{if } a=b\end{cases}$$</span></p> <p>Now, <span class="math-container">$(2\cos x\cos4x)(2\cos2x\cos3x)=(\cos3x+\cos5x)(\cos x+\cos5x)$</span></p> <p><span class="math-container">$=\cos3x\cos x+\cos x\cos5x+\cos3x\cos5x+\cos5x\cdot\cos5x$</span></p> <p>So the three cross terms integrate to zero, only <span class="math-container">$\cos5x\cdot\cos5x$</span> (the case <span class="math-container">$a=b=5$</span>) survives, and <span class="math-container">$\displaystyle4\int_0^\pi\cos x\cos2x\cos3x\cos4x\ dx=\dfrac\pi2$</span></p> Answer 4: <p>There is a <a href="https://qualifications.pearson.com/content/dam/pdf/A%20Level/Mathematics/2013/Specification%20and%20sample%20assessments/N38210A-GCE-Mathematical-Formulae-Statistical-Tables.pdf" rel="nofollow noreferrer">well-known</a> identity which says</p> <p>$$\cos A + \cos B = 2\cos\left(\frac{A-B}{2}\right)\cos\left(\frac{A+B}{2}\right)$$</p> <p>If we put $\frac{A-B}{2} = x$ and $\frac{A+B}{2}=2x$ then we get $A=3x$ and $B=x$, so $$ \cos x \cos 2x \equiv \frac{1}{2}(\cos x+\cos 3x) $$</p> <p>We can repeat this for $\cos 3x$ and $\cos 4x$. Solving $\frac{A-B}{2} = 3x$ and $\frac{A+B}{2}=4x$ gives $$\cos 3x \cos 4x \equiv \frac{1}{2}(\cos x + \cos 7x)$$ Putting this together gives $$\cos x \cos 2x \cos 3x \cos 4x \equiv \frac{1}{4}(\cos x+\cos 3x)(\cos x+\cos 7x)$$</p> <p>Now, you need to expand these brackets and follow the same procedure to simplify $\cos x \cos x$, $\cos x \cos 7x$, $\cos 3x \cos x$ and $\cos 3x \cos 7x$.</p>
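Carrying out the final step that Answers 2 and 4 leave to the reader: since $\int_0^\pi\cos(kx)\,dx=0$ for every nonzero integer $k$, only the constant term of the product-to-sum expansion survives, so $$\int_0^\pi \cos x\cos 2x\cos 3x\cos 4x\,dx=\frac18\int_0^\pi\left[1+2\cos 2x+2\cos 4x+\cos 6x+\cos 8x+\cos 10x\right]dx=\frac{\pi}{8},$$ in agreement with the contour-integral and orthogonality arguments above.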
Solving $\frac{A-B}{2} = 3x$ and $\frac{A+B}{2}=4x$ gives $$\cos 3x \cos 4x \equiv \frac{1}{2}(\cos x + \cos 7x)$$ Putting this together gives $$\cos x \cos 2x \cos 3x \cos 4x \equiv \frac{1}{4}(\cos x+\cos 3x)(\cos x+\cos 7x)$$</p> <p>Now, you need to expand these brackets and follow the same procedure to simplify $\cos x \cos x$, $\cos x \cos 7x$, $\cos 3x \cos x$ and $\cos 3x \cos 7x$.</p> Answer 5: <p>Because I like it, I will add a tricky approach (<span class="math-container">$C$</span> denotes the unit circle):</p> <p><span class="math-container">$$ I=\frac{1}{2}\int_{-\pi}^{\pi}dx\prod_{n=1}^4\cos(nx)\underbrace{=}_{z=e^{ix}}\frac{1}{32i}\oint_C\frac{1}{z^{11}}\prod_{n=1}^4(z^{2n}+1). $$</span></p> <p>Now, since <span class="math-container">$\oint_Cz^{n}=0$</span> for <span class="math-container">$n\in \mathbb{Z}$</span> and <span class="math-container">$n\neq-1$</span>, only the terms of the product with total power of <span class="math-container">$10$</span> will contribute. There are exactly two of them <span class="math-container">$2+8=4+6=10$</span>, so</p> <blockquote> <p><span class="math-container">$$ I=\frac{1}{32i}\oint_C\frac{2}{z}=\frac{\pi}8 $$</span></p> </blockquote> <p>where the last equality results from the residue theorem.</p> <hr> <p>Fiddling around with generalizations of this result and consulting OEIS I stumbled over this interesting set of slides: <a href="http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf" rel="nofollow noreferrer">http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf</a> So integrals of this kind have a deep connection to problems in number theory which is pretty awesome</p> Answer 6: <p><strong>HINT:</strong> We have the following identities </p> <p>$\cos(A+ B) = \cos A \cos B - \sin A \sin B$ and </p> <p>$\cos(A-B) = \cos A \cos B + \sin A \sin B$</p> <p>$2\cos A \cos B = \cos(A+B) + \cos (A-B)$</p> <p>$\cos A \cos B = \dfrac{\cos(A+B) + \cos(A-B)}{2}$</p> <p>Take $\cos x$ and $\cos 4x$ together and $\cos 2x$ and $\cos 3x$ together.</p> <p>Then $\cos(x) \cos(2x) \cos(3x) \cos(4x) =\\ \frac18[1 + \cos(10x) + \cos(8x)+ \cos(6x)+2\cos(4x)+2\cos(2x)+\cos(x) ]$.</p> <p>Now you can do with your usual integration formula.</p> Answer 7: <p>Using <a href="http://mathworld.wolfram.com/WernerFormulas.html" rel="nofollow noreferrer">Werner's</a> formula with <span class="math-container">$a\ge b&gt;0$</span> so that <span class="math-container">$a\pm b$</span> are integers</p> <p><span class="math-container">$$\int_0^\pi\cos ax\cos bx\ dx=\dfrac12\int_0^\pi\{\cos(a+b)x+\cos(a-b)x\} dx=\cdots=\begin{cases}0&amp;\mbox{if } a\ne b\\ \dfrac\pi2 &amp; \mbox{if } a=b\end{cases}$$</span></p> <p>Now, <span class="math-container">$(2\cos x\cos4x)(2\cos2x\cos3x)=(\cos3x+\cos5x)(\cos x+\cos5x)$</span></p> <p><span class="math-container">$=\cos3x\cos x+\cos x\cos5x+\cos3x\cos5x+\cos5x\cdot\cos5x$</span></p> <p>So, <span class="math-container">$\displaystyle4\int_0^\pi\cos x\cos2x\cos3x\cos4x\ dx=\dfrac\pi2$</span> for <span class="math-container">$a=b=5$</span></p> Answer 8: <p>There is a <a href="https://qualifications.pearson.com/content/dam/pdf/A%20Level/Mathematics/2013/Specification%20and%20sample%20assessments/N38210A-GCE-Mathematical-Formulae-Statistical-Tables.pdf" rel="nofollow noreferrer">well-known</a> identity which says</p> <p>$$\cos A + \cos B = 2\cos\left(\frac{A-B}{2}\right)\cos\left(\frac{A+B}{2}\right)$$</p> <p>If we put $\frac{A-B}{2} = x$ and $\frac{A+B}{2}=2x$ then we get $A=3x$ and $B=x$, so $$ \cos x \cos 2x \equiv 
\frac{1}{2}(\cos x+\cos 3x) $$</p> <p>We can repeat this for $\cos 3x$ and $\cos 4x$. Solving $\frac{A-B}{2} = 3x$ and $\frac{A+B}{2}=4x$ gives $$\cos 3x \cos 4x \equiv \frac{1}{2}(\cos x + \cos 7x)$$ Putting this together gives $$\cos x \cos 2x \cos 3x \cos 4x \equiv \frac{1}{4}(\cos x+\cos 3x)(\cos x+\cos 7x)$$</p> <p>Now, you need to expand these brackets and follow the same procedure to simplify $\cos x \cos x$, $\cos x \cos 7x$, $\cos 3x \cos x$ and $\cos 3x \cos 7x$.</p> Answer 9: <p>Because I like it, I will add a tricky approach (<span class="math-container">$C$</span> denotes the unit circle):</p> <p><span class="math-container">$$ I=\frac{1}{2}\int_{-\pi}^{\pi}dx\prod_{n=1}^4\cos(nx)\underbrace{=}_{z=e^{ix}}\frac{1}{32i}\oint_C\frac{1}{z^{11}}\prod_{n=1}^4(z^{2n}+1). $$</span></p> <p>Now, since <span class="math-container">$\oint_Cz^{n}=0$</span> for <span class="math-container">$n\in \mathbb{Z}$</span> and <span class="math-container">$n\neq-1$</span>, only the terms of the product with total power of <span class="math-container">$10$</span> will contribute. There are exactly two of them <span class="math-container">$2+8=4+6=10$</span>, so</p> <blockquote> <p><span class="math-container">$$ I=\frac{1}{32i}\oint_C\frac{2}{z}=\frac{\pi}8 $$</span></p> </blockquote> <p>where the last equality results from the residue theorem.</p> <hr> <p>Fiddling around with generalizations of this result and consulting OEIS I stumbled over this interesting set of slides: <a href="http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf" rel="nofollow noreferrer">http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf</a> So integrals of this kind have a deep connection to problems in number theory which is pretty awesome</p> Answer 10: <p><strong>HINT:</strong> We have the following identities </p> <p>$\cos(A+ B) = \cos A \cos B - \sin A \sin B$ and </p> <p>$\cos(A-B) = \cos A \cos B + \sin A \sin B$</p> <p>$2\cos A \cos B = \cos(A+B) + \cos (A-B)$</p> <p>$\cos A \cos B = \dfrac{\cos(A+B) + \cos(A-B)}{2}$</p> <p>Take $\cos x$ and $\cos 4x$ together and $\cos 2x$ and $\cos 3x$ together.</p> <p>Then $\cos(x) \cos(2x) \cos(3x) \cos(4x) =\\ \frac18[1 + \cos(10x) + \cos(8x)+ \cos(6x)+2\cos(4x)+2\cos(2x)+\cos(x) ]$.</p> <p>Now you can do with your usual integration formula.</p> Answer 11: <p>Using <a href="http://mathworld.wolfram.com/WernerFormulas.html" rel="nofollow noreferrer">Werner's</a> formula with <span class="math-container">$a\ge b&gt;0$</span> so that <span class="math-container">$a\pm b$</span> are integers</p> <p><span class="math-container">$$\int_0^\pi\cos ax\cos bx\ dx=\dfrac12\int_0^\pi\{\cos(a+b)x+\cos(a-b)x\} dx=\cdots=\begin{cases}0&amp;\mbox{if } a\ne b\\ \dfrac\pi2 &amp; \mbox{if } a=b\end{cases}$$</span></p> <p>Now, <span class="math-container">$(2\cos x\cos4x)(2\cos2x\cos3x)=(\cos3x+\cos5x)(\cos x+\cos5x)$</span></p> <p><span class="math-container">$=\cos3x\cos x+\cos x\cos5x+\cos3x\cos5x+\cos5x\cdot\cos5x$</span></p> <p>So, <span class="math-container">$\displaystyle4\int_0^\pi\cos x\cos2x\cos3x\cos4x\ dx=\dfrac\pi2$</span> for <span class="math-container">$a=b=5$</span></p> Answer 12: <p>There is a <a href="https://qualifications.pearson.com/content/dam/pdf/A%20Level/Mathematics/2013/Specification%20and%20sample%20assessments/N38210A-GCE-Mathematical-Formulae-Statistical-Tables.pdf" rel="nofollow noreferrer">well-known</a> identity which says</p> <p>$$\cos A + \cos B = 2\cos\left(\frac{A-B}{2}\right)\cos\left(\frac{A+B}{2}\right)$$</p> <p>If we put 
$\frac{A-B}{2} = x$ and $\frac{A+B}{2}=2x$ then we get $A=3x$ and $B=x$, so $$ \cos x \cos 2x \equiv \frac{1}{2}(\cos x+\cos 3x) $$</p> <p>We can repeat this for $\cos 3x$ and $\cos 4x$. Solving $\frac{A-B}{2} = 3x$ and $\frac{A+B}{2}=4x$ gives $$\cos 3x \cos 4x \equiv \frac{1}{2}(\cos x + \cos 7x)$$ Putting this together gives $$\cos x \cos 2x \cos 3x \cos 4x \equiv \frac{1}{4}(\cos x+\cos 3x)(\cos x+\cos 7x)$$</p> <p>Now, you need to expand these brackets and follow the same procedure to simplify $\cos x \cos x$, $\cos x \cos 7x$, $\cos 3x \cos x$ and $\cos 3x \cos 7x$.</p> Answer 13: <p>Because I like it, I will add a tricky approach (<span class="math-container">$C$</span> denotes the unit circle):</p> <p><span class="math-container">$$ I=\frac{1}{2}\int_{-\pi}^{\pi}dx\prod_{n=1}^4\cos(nx)\underbrace{=}_{z=e^{ix}}\frac{1}{32i}\oint_C\frac{1}{z^{11}}\prod_{n=1}^4(z^{2n}+1). $$</span></p> <p>Now, since <span class="math-container">$\oint_Cz^{n}=0$</span> for <span class="math-container">$n\in \mathbb{Z}$</span> and <span class="math-container">$n\neq-1$</span>, only the terms of the product with total power of <span class="math-container">$10$</span> will contribute. There are exactly two of them <span class="math-container">$2+8=4+6=10$</span>, so</p> <blockquote> <p><span class="math-container">$$ I=\frac{1}{32i}\oint_C\frac{2}{z}=\frac{\pi}8 $$</span></p> </blockquote> <p>where the last equality results from the residue theorem.</p> <hr> <p>Fiddling around with generalizations of this result and consulting OEIS I stumbled over this interesting set of slides: <a href="http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf" rel="nofollow noreferrer">http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf</a> So integrals of this kind have a deep connection to problems in number theory which is pretty awesome</p> Answer 14: <p><strong>HINT:</strong> We have the following identities </p> <p>$\cos(A+ B) = \cos A \cos B - \sin A \sin B$ and </p> <p>$\cos(A-B) = \cos A \cos B + \sin A \sin B$</p> <p>$2\cos A \cos B = \cos(A+B) + \cos (A-B)$</p> <p>$\cos A \cos B = \dfrac{\cos(A+B) + \cos(A-B)}{2}$</p> <p>Take $\cos x$ and $\cos 4x$ together and $\cos 2x$ and $\cos 3x$ together.</p> <p>Then $\cos(x) \cos(2x) \cos(3x) \cos(4x) =\\ \frac18[1 + \cos(10x) + \cos(8x)+ \cos(6x)+2\cos(4x)+2\cos(2x)+\cos(x) ]$.</p> <p>Now you can do with your usual integration formula.</p> Answer 15: <p>Using <a href="http://mathworld.wolfram.com/WernerFormulas.html" rel="nofollow noreferrer">Werner's</a> formula with <span class="math-container">$a\ge b&gt;0$</span> so that <span class="math-container">$a\pm b$</span> are integers</p> <p><span class="math-container">$$\int_0^\pi\cos ax\cos bx\ dx=\dfrac12\int_0^\pi\{\cos(a+b)x+\cos(a-b)x\} dx=\cdots=\begin{cases}0&amp;\mbox{if } a\ne b\\ \dfrac\pi2 &amp; \mbox{if } a=b\end{cases}$$</span></p> <p>Now, <span class="math-container">$(2\cos x\cos4x)(2\cos2x\cos3x)=(\cos3x+\cos5x)(\cos x+\cos5x)$</span></p> <p><span class="math-container">$=\cos3x\cos x+\cos x\cos5x+\cos3x\cos5x+\cos5x\cdot\cos5x$</span></p> <p>So, <span class="math-container">$\displaystyle4\int_0^\pi\cos x\cos2x\cos3x\cos4x\ dx=\dfrac\pi2$</span> for <span class="math-container">$a=b=5$</span></p> Answer 16: <p>There is a <a href="https://qualifications.pearson.com/content/dam/pdf/A%20Level/Mathematics/2013/Specification%20and%20sample%20assessments/N38210A-GCE-Mathematical-Formulae-Statistical-Tables.pdf" rel="nofollow noreferrer">well-known</a> identity which says</p> <p>$$\cos A + 
\cos B = 2\cos\left(\frac{A-B}{2}\right)\cos\left(\frac{A+B}{2}\right)$$</p> <p>If we put $\frac{A-B}{2} = x$ and $\frac{A+B}{2}=2x$ then we get $A=3x$ and $B=x$, so $$ \cos x \cos 2x \equiv \frac{1}{2}(\cos x+\cos 3x) $$</p> <p>We can repeat this for $\cos 3x$ and $\cos 4x$. Solving $\frac{A-B}{2} = 3x$ and $\frac{A+B}{2}=4x$ gives $$\cos 3x \cos 4x \equiv \frac{1}{2}(\cos x + \cos 7x)$$ Putting this together gives $$\cos x \cos 2x \cos 3x \cos 4x \equiv \frac{1}{4}(\cos x+\cos 3x)(\cos x+\cos 7x)$$</p> <p>Now, you need to expand these brackets and follow the same procedure to simplify $\cos x \cos x$, $\cos x \cos 7x$, $\cos 3x \cos x$ and $\cos 3x \cos 7x$.</p> Answer 17: <p>Because I like it, I will add a tricky approach (<span class="math-container">$C$</span> denotes the unit circle):</p> <p><span class="math-container">$$ I=\frac{1}{2}\int_{-\pi}^{\pi}dx\prod_{n=1}^4\cos(nx)\underbrace{=}_{z=e^{ix}}\frac{1}{32i}\oint_C\frac{1}{z^{11}}\prod_{n=1}^4(z^{2n}+1). $$</span></p> <p>Now, since <span class="math-container">$\oint_Cz^{n}=0$</span> for <span class="math-container">$n\in \mathbb{Z}$</span> and <span class="math-container">$n\neq-1$</span>, only the terms of the product with total power of <span class="math-container">$10$</span> will contribute. There are exactly two of them <span class="math-container">$2+8=4+6=10$</span>, so</p> <blockquote> <p><span class="math-container">$$ I=\frac{1}{32i}\oint_C\frac{2}{z}=\frac{\pi}8 $$</span></p> </blockquote> <p>where the last equality results from the residue theorem.</p> <hr> <p>Fiddling around with generalizations of this result and consulting OEIS I stumbled over this interesting set of slides: <a href="http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf" rel="nofollow noreferrer">http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf</a> So integrals of this kind have a deep connection to problems in number theory which is pretty awesome</p> Answer 18: <p><strong>HINT:</strong> We have the following identities </p> <p>$\cos(A+ B) = \cos A \cos B - \sin A \sin B$ and </p> <p>$\cos(A-B) = \cos A \cos B + \sin A \sin B$</p> <p>$2\cos A \cos B = \cos(A+B) + \cos (A-B)$</p> <p>$\cos A \cos B = \dfrac{\cos(A+B) + \cos(A-B)}{2}$</p> <p>Take $\cos x$ and $\cos 4x$ together and $\cos 2x$ and $\cos 3x$ together.</p> <p>Then $\cos(x) \cos(2x) \cos(3x) \cos(4x) =\\ \frac18[1 + \cos(10x) + \cos(8x)+ \cos(6x)+2\cos(4x)+2\cos(2x)+\cos(x) ]$.</p> <p>Now you can do with your usual integration formula.</p> Answer 19: <p>Using <a href="http://mathworld.wolfram.com/WernerFormulas.html" rel="nofollow noreferrer">Werner's</a> formula with <span class="math-container">$a\ge b&gt;0$</span> so that <span class="math-container">$a\pm b$</span> are integers</p> <p><span class="math-container">$$\int_0^\pi\cos ax\cos bx\ dx=\dfrac12\int_0^\pi\{\cos(a+b)x+\cos(a-b)x\} dx=\cdots=\begin{cases}0&amp;\mbox{if } a\ne b\\ \dfrac\pi2 &amp; \mbox{if } a=b\end{cases}$$</span></p> <p>Now, <span class="math-container">$(2\cos x\cos4x)(2\cos2x\cos3x)=(\cos3x+\cos5x)(\cos x+\cos5x)$</span></p> <p><span class="math-container">$=\cos3x\cos x+\cos x\cos5x+\cos3x\cos5x+\cos5x\cdot\cos5x$</span></p> <p>So, <span class="math-container">$\displaystyle4\int_0^\pi\cos x\cos2x\cos3x\cos4x\ dx=\dfrac\pi2$</span> for <span class="math-container">$a=b=5$</span></p> Answer 20: <p>There is a <a 
href="https://qualifications.pearson.com/content/dam/pdf/A%20Level/Mathematics/2013/Specification%20and%20sample%20assessments/N38210A-GCE-Mathematical-Formulae-Statistical-Tables.pdf" rel="nofollow noreferrer">well-known</a> identity which says</p> <p>$$\cos A + \cos B = 2\cos\left(\frac{A-B}{2}\right)\cos\left(\frac{A+B}{2}\right)$$</p> <p>If we put $\frac{A-B}{2} = x$ and $\frac{A+B}{2}=2x$ then we get $A=3x$ and $B=x$, so $$ \cos x \cos 2x \equiv \frac{1}{2}(\cos x+\cos 3x) $$</p> <p>We can repeat this for $\cos 3x$ and $\cos 4x$. Solving $\frac{A-B}{2} = 3x$ and $\frac{A+B}{2}=4x$ gives $$\cos 3x \cos 4x \equiv \frac{1}{2}(\cos x + \cos 7x)$$ Putting this together gives $$\cos x \cos 2x \cos 3x \cos 4x \equiv \frac{1}{4}(\cos x+\cos 3x)(\cos x+\cos 7x)$$</p> <p>Now, you need to expand these brackets and follow the same procedure to simplify $\cos x \cos x$, $\cos x \cos 7x$, $\cos 3x \cos x$ and $\cos 3x \cos 7x$.</p> Answer 21: <p>Because I like it, I will add a tricky approach (<span class="math-container">$C$</span> denotes the unit circle):</p> <p><span class="math-container">$$ I=\frac{1}{2}\int_{-\pi}^{\pi}dx\prod_{n=1}^4\cos(nx)\underbrace{=}_{z=e^{ix}}\frac{1}{32i}\oint_C\frac{1}{z^{11}}\prod_{n=1}^4(z^{2n}+1). $$</span></p> <p>Now, since <span class="math-container">$\oint_Cz^{n}=0$</span> for <span class="math-container">$n\in \mathbb{Z}$</span> and <span class="math-container">$n\neq-1$</span>, only the terms of the product with total power of <span class="math-container">$10$</span> will contribute. There are exactly two of them <span class="math-container">$2+8=4+6=10$</span>, so</p> <blockquote> <p><span class="math-container">$$ I=\frac{1}{32i}\oint_C\frac{2}{z}=\frac{\pi}8 $$</span></p> </blockquote> <p>where the last equality results from the residue theorem.</p> <hr> <p>Fiddling around with generalizations of this result and consulting OEIS I stumbled over this interesting set of slides: <a href="http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf" rel="nofollow noreferrer">http://www.dorinandrica.ro/files/presentation-INTEGERS-2013.pdf</a> So integrals of this kind have a deep connection to problems in number theory which is pretty awesome</p> Answer 22: <p><strong>HINT:</strong> We have the following identities </p> <p>$\cos(A+ B) = \cos A \cos B - \sin A \sin B$ and </p> <p>$\cos(A-B) = \cos A \cos B + \sin A \sin B$</p> <p>$2\cos A \cos B = \cos(A+B) + \cos (A-B)$</p> <p>$\cos A \cos B = \dfrac{\cos(A+B) + \cos(A-B)}{2}$</p> <p>Take $\cos x$ and $\cos 4x$ together and $\cos 2x$ and $\cos 3x$ together.</p> <p>Then $\cos(x) \cos(2x) \cos(3x) \cos(4x) =\\ \frac18[1 + \cos(10x) + \cos(8x)+ \cos(6x)+2\cos(4x)+2\cos(2x)+\cos(x) ]$.</p> <p>Now you can do with your usual integration formula.</p> Answer 23: <p>Using <a href="http://mathworld.wolfram.com/WernerFormulas.html" rel="nofollow noreferrer">Werner's</a> formula with <span class="math-container">$a\ge b&gt;0$</span> so that <span class="math-container">$a\pm b$</span> are integers</p> <p><span class="math-container">$$\int_0^\pi\cos ax\cos bx\ dx=\dfrac12\int_0^\pi\{\cos(a+b)x+\cos(a-b)x\} dx=\cdots=\begin{cases}0&amp;\mbox{if } a\ne b\\ \dfrac\pi2 &amp; \mbox{if } a=b\end{cases}$$</span></p> <p>Now, <span class="math-container">$(2\cos x\cos4x)(2\cos2x\cos3x)=(\cos3x+\cos5x)(\cos x+\cos5x)$</span></p> <p><span class="math-container">$=\cos3x\cos x+\cos x\cos5x+\cos3x\cos5x+\cos5x\cdot\cos5x$</span></p> <p>So, <span class="math-container">$\displaystyle4\int_0^\pi\cos x\cos2x\cos3x\cos4x\ 
dx=\dfrac\pi2$</span> for <span class="math-container">$a=b=5$</span></p>
Prob. 17, Chap. 3 in Baby Rudin: For $\alpha &gt; 1$, how to obtain these inequalities from this recurrence relation?
2,079,180
6
https://math.stackexchange.com/questions/2079180/prob-17-chap-3-in-baby-rudin-for-alpha-1-how-to-obtain-these-inequalit
<p>Here's Prob. 17, Chap. 3 in the book <em>Principles of Mathematical Analysis</em> by Walter Rudin, 3rd edition: </p> <blockquote> <p>Fix $\alpha &gt; 1$. Take $x_1 &gt; \sqrt{\alpha}$, and define $$x_{n+1} = \frac{\alpha + x_n}{1+x_n} = x_n + \frac{\alpha - x_n^2}{1+x_n}.$$ </p> <p>(a) Prove that $x_1 &gt; x_3 &gt; x_5 &gt; \cdots$. </p> <p>(b) Prove that $x_2 &lt; x_4 &lt; x_6 &lt; \cdots$. </p> <p>(c) Prove that $\lim x_n = \sqrt{\alpha}$. </p> </blockquote> <p>My effort: </p> <p>From the recursion formula, we can obtain $$ \begin{align} x_{n+1} &amp;= \frac{ \alpha + x_n}{1+ x_n} \\ &amp;= \frac{ \alpha + \frac{\alpha + x_{n-1}}{1+x_{n-1}} }{ 1 + \frac{\alpha + x_{n-1}}{1+x_{n-1}} } \\ &amp;= \frac{ (\alpha + 1) x_{n-1} + 2 \alpha }{ 2x_{n-1} + ( 1 + \alpha ) } \\ &amp;= \frac{\alpha+1}{2} + \frac{2 \alpha - \frac{(\alpha+1)^2}{2} }{2x_{n-1} + ( 1 + \alpha ) } \\ &amp;= \frac{\alpha+1}{2} + \frac{ \alpha - \frac{\alpha^2+1 }{2} }{2x_{n-1} + ( 1 + \alpha ) }. \end{align} $$</p> <p>What next? </p>
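(Added sanity check of the algebra above, with arbitrary sample values $\alpha = 3$ and $x_{n-1} = 2$: the derived two-step closed form agrees with applying the recurrence twice.)

# Compare two applications of t -> (alpha + t)/(1 + t) against the
# closed form (alpha+1)/2 + (alpha - (alpha^2+1)/2)/(2x + 1 + alpha).
alpha, x = 3.0, 2.0  # any alpha > 1 and x > sqrt(alpha) will do
step = lambda t: (alpha + t) / (1 + t)
two_steps = step(step(x))
closed = (alpha + 1) / 2 + (alpha - (alpha**2 + 1) / 2) / (2 * x + 1 + alpha)
print(two_steps, closed)  # both print 1.75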
[ { "answer_id": 2079238, "score": 6, "text": "<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} = 1 + \\frac{\\alpha-1}{1+x_n}. $$\nSince $\\alpha-1&gt;0$, we have that if $x_n&lt;\\sqrt{\\alpha}$, then\n$$ x_{n+1} &gt; 1 + \\frac{\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{1+\\sqrt{\\alpha}+\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{\\sqrt{\\alpha}+\\alpha}{1+\\sqrt{\\alpha}} = \\sqrt{\\alpha}$$\nand similarly, if $x_n &gt; \\sqrt{\\alpha}$, then $x_{n+1} &lt; \\sqrt{\\alpha}$. Since $x_1&gt;\\sqrt{\\alpha}$, it follows from induction that $x_n&lt;\\sqrt{\\alpha}$ for $n$ even and $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd. In particular, $x_{n+1}-x_n &gt; 0 $ if $n$ is even, and $x_{n+1}-x_n &lt; 0 $ if $n$ is odd.</p>\n\n<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n}\\implies x_{n+1}(1+x_n) = \\alpha+x_n \\implies x_nx_{n+1} = \\alpha - (x_{n+1}-x_n) $$\nand hence\n$$ x_n(x_{n+1}-x_{n-1}) = (x_n-x_{n-1}) - (x_{n+1}-x_n). $$\nIt is clear that $x_n&gt;0$ for all $n$, so we see that if $n$ is odd, then $x_n-x_{n-1}&gt;0$ and $x_{n+1}-x_n&lt;0$, so $x_{n+1}-x_{n-1} &gt; 0$, while if $n$ is even, then $x_n-x_{n-1}&lt;0$ and $x_{n+1}-x_n&gt;0$, so $x_{n+1}-x_{n-1} &lt;0$.</p>\n\n<p>Thus, $x_3-x_1&lt;0$, $x_5-x_3&lt;0$, and so on, so $x_1 &gt; x_3 &gt; x_5 &gt; \\dots$, while $x_4-x_2&gt;0$, $x_6-x_4&gt;0$, and so on, so $x_2 &lt; x_4 &lt; x_6 &lt; \\dots$.</p>\n\n<p>This proves (a) and (b).</p>\n\n<p>Now, since $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd, and $x_1&gt;x_3&gt;x_5&gt;\\dots$, it follows that the subsequence of odd terms $\\{x_{2n-1}\\}$ is monotonically decreasing and bounded from below, and hence has a limit (say $L$). Similarly, the subsequence of even terms $\\{x_{2n}\\}$ is monotonically increasing and bounded from above (namely by $\\sqrt{a}$), and so it has a limit as well (say $M$). These limits must satisfy $L\\ge\\sqrt{\\alpha}$ and $M\\le\\sqrt{\\alpha}$. From the equation\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} $$\nif we consider $n$ odd and take limits on both sides, we obtain\n$$ M = \\frac{\\alpha+L}{1+L} $$\nwhile if we consider $n$ even and take limits, we obtain\n$$ L = \\frac{\\alpha+M}{1+M}. $$\nThus, if we define the sequence $\\{y_n\\}$ by $y_1 = L$ and $y_{n+1} = \\frac{\\alpha+y_n}{1+y_n}$, then the sequence $\\{y_n\\}$ is just $\\{L,M,L,M,\\dots\\}$. If $L&gt;\\sqrt{\\alpha}$, then we can actually apply what we proved in part (a) to conclude that $y_1 &gt; y_3 &gt; y_5 &gt; \\dots$, which is impossible since all odd terms are $L$. Since $L\\ge\\sqrt{\\alpha}$, it must follow that $L = \\sqrt{\\alpha}$, and you can easily check that this forces $M = \\sqrt{\\alpha}$ as well. Thus, the odd and even subsequential limits are both $\\sqrt{\\alpha}$, so the limit of the sequence is $\\sqrt{\\alpha}$ as well.</p>\n" }, { "answer_id": 2079241, "score": 1, "text": "<p>$$\\text {Let } a_n=(\\sqrt \\alpha)\\tan (d_n+\\pi /4) \\text { with } d_n\\in (0,\\pi /4).$$ Using $\\tan (x +\\pi /4)=(1+\\tan x)/(1-\\tan x)$ when $\\tan x \\ne 1,$ we arrive, after some calculation, that $$\\tan d_{n+1}=k \\tan d_n$$ $$ \\text { where } k=\\frac {1-\\sqrt {\\alpha} }{1+\\sqrt {\\alpha}}.$$ Observe that $0&gt;k&gt;-1$ because $\\alpha &gt;1.$</p>\n\n<p>Remark. 
I tried this substitution by heuristic analogy with a substitution for Heron's method: If $\\sqrt {\\alpha} \\ne b_1&gt;0$ and $b_{n+1}=(b_n+\\alpha /b_n)/2,$ then $b_n&gt;\\sqrt {\\alpha}$ for $n\\geq 2.$ Let $b_n=\\sqrt {\\alpha} \\coth c_n$ for $n\\geq 2$, with $c_2&gt;0.$ Then $c_n=2^{n-2}c_2$ for $n\\geq 2.$ In this Q I tried $\\tan$ because $\\tan d_n$ alternates $\\pm. $</p>\n" }, { "answer_id": 2079238, "score": 6, "text": "<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} = 1 + \\frac{\\alpha-1}{1+x_n}. $$\nSince $\\alpha-1&gt;0$, we have that if $x_n&lt;\\sqrt{\\alpha}$, then\n$$ x_{n+1} &gt; 1 + \\frac{\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{1+\\sqrt{\\alpha}+\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{\\sqrt{\\alpha}+\\alpha}{1+\\sqrt{\\alpha}} = \\sqrt{\\alpha}$$\nand similarly, if $x_n &gt; \\sqrt{\\alpha}$, then $x_{n+1} &lt; \\sqrt{\\alpha}$. Since $x_1&gt;\\sqrt{\\alpha}$, it follows from induction that $x_n&lt;\\sqrt{\\alpha}$ for $n$ even and $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd. In particular, $x_{n+1}-x_n &gt; 0 $ if $n$ is even, and $x_{n+1}-x_n &lt; 0 $ if $n$ is odd.</p>\n\n<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n}\\implies x_{n+1}(1+x_n) = \\alpha+x_n \\implies x_nx_{n+1} = \\alpha - (x_{n+1}-x_n) $$\nand hence\n$$ x_n(x_{n+1}-x_{n-1}) = (x_n-x_{n-1}) - (x_{n+1}-x_n). $$\nIt is clear that $x_n&gt;0$ for all $n$, so we see that if $n$ is odd, then $x_n-x_{n-1}&gt;0$ and $x_{n+1}-x_n&lt;0$, so $x_{n+1}-x_{n-1} &gt; 0$, while if $n$ is even, then $x_n-x_{n-1}&lt;0$ and $x_{n+1}-x_n&gt;0$, so $x_{n+1}-x_{n-1} &lt;0$.</p>\n\n<p>Thus, $x_3-x_1&lt;0$, $x_5-x_3&lt;0$, and so on, so $x_1 &gt; x_3 &gt; x_5 &gt; \\dots$, while $x_4-x_2&gt;0$, $x_6-x_4&gt;0$, and so on, so $x_2 &lt; x_4 &lt; x_6 &lt; \\dots$.</p>\n\n<p>This proves (a) and (b).</p>\n\n<p>Now, since $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd, and $x_1&gt;x_3&gt;x_5&gt;\\dots$, it follows that the subsequence of odd terms $\\{x_{2n-1}\\}$ is monotonically decreasing and bounded from below, and hence has a limit (say $L$). Similarly, the subsequence of even terms $\\{x_{2n}\\}$ is monotonically increasing and bounded from above (namely by $\\sqrt{a}$), and so it has a limit as well (say $M$). These limits must satisfy $L\\ge\\sqrt{\\alpha}$ and $M\\le\\sqrt{\\alpha}$. From the equation\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} $$\nif we consider $n$ odd and take limits on both sides, we obtain\n$$ M = \\frac{\\alpha+L}{1+L} $$\nwhile if we consider $n$ even and take limits, we obtain\n$$ L = \\frac{\\alpha+M}{1+M}. $$\nThus, if we define the sequence $\\{y_n\\}$ by $y_1 = L$ and $y_{n+1} = \\frac{\\alpha+y_n}{1+y_n}$, then the sequence $\\{y_n\\}$ is just $\\{L,M,L,M,\\dots\\}$. If $L&gt;\\sqrt{\\alpha}$, then we can actually apply what we proved in part (a) to conclude that $y_1 &gt; y_3 &gt; y_5 &gt; \\dots$, which is impossible since all odd terms are $L$. Since $L\\ge\\sqrt{\\alpha}$, it must follow that $L = \\sqrt{\\alpha}$, and you can easily check that this forces $M = \\sqrt{\\alpha}$ as well. 
Thus, the odd and even subsequential limits are both $\\sqrt{\\alpha}$, so the limit of the sequence is $\\sqrt{\\alpha}$ as well.</p>\n" }, { "answer_id": 2079241, "score": 1, "text": "<p>$$\\text {Let } a_n=(\\sqrt \\alpha)\\tan (d_n+\\pi /4) \\text { with } d_n\\in (0,\\pi /4).$$ Using $\\tan (x +\\pi /4)=(1+\\tan x)/(1-\\tan x)$ when $\\tan x \\ne 1,$ we arrive, after some calculation, that $$\\tan d_{n+1}=k \\tan d_n$$ $$ \\text { where } k=\\frac {1-\\sqrt {\\alpha} }{1+\\sqrt {\\alpha}}.$$ Observe that $0&gt;k&gt;-1$ because $\\alpha &gt;1.$</p>\n\n<p>Remark. I tried this substitution by heuristic analogy with a substitution for Heron's method: If $\\sqrt {\\alpha} \\ne b_1&gt;0$ and $b_{n+1}=(b_n+\\alpha /b_n)/2,$ then $b_n&gt;\\sqrt {\\alpha}$ for $n\\geq 2.$ Let $b_n=\\sqrt {\\alpha} \\coth c_n$ for $n\\geq 2$, with $c_2&gt;0.$ Then $c_n=2^{n-2}c_2$ for $n\\geq 2.$ In this Q I tried $\\tan$ because $\\tan d_n$ alternates $\\pm. $</p>\n" }, { "answer_id": 2079238, "score": 6, "text": "<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} = 1 + \\frac{\\alpha-1}{1+x_n}. $$\nSince $\\alpha-1&gt;0$, we have that if $x_n&lt;\\sqrt{\\alpha}$, then\n$$ x_{n+1} &gt; 1 + \\frac{\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{1+\\sqrt{\\alpha}+\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{\\sqrt{\\alpha}+\\alpha}{1+\\sqrt{\\alpha}} = \\sqrt{\\alpha}$$\nand similarly, if $x_n &gt; \\sqrt{\\alpha}$, then $x_{n+1} &lt; \\sqrt{\\alpha}$. Since $x_1&gt;\\sqrt{\\alpha}$, it follows from induction that $x_n&lt;\\sqrt{\\alpha}$ for $n$ even and $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd. In particular, $x_{n+1}-x_n &gt; 0 $ if $n$ is even, and $x_{n+1}-x_n &lt; 0 $ if $n$ is odd.</p>\n\n<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n}\\implies x_{n+1}(1+x_n) = \\alpha+x_n \\implies x_nx_{n+1} = \\alpha - (x_{n+1}-x_n) $$\nand hence\n$$ x_n(x_{n+1}-x_{n-1}) = (x_n-x_{n-1}) - (x_{n+1}-x_n). $$\nIt is clear that $x_n&gt;0$ for all $n$, so we see that if $n$ is odd, then $x_n-x_{n-1}&gt;0$ and $x_{n+1}-x_n&lt;0$, so $x_{n+1}-x_{n-1} &gt; 0$, while if $n$ is even, then $x_n-x_{n-1}&lt;0$ and $x_{n+1}-x_n&gt;0$, so $x_{n+1}-x_{n-1} &lt;0$.</p>\n\n<p>Thus, $x_3-x_1&lt;0$, $x_5-x_3&lt;0$, and so on, so $x_1 &gt; x_3 &gt; x_5 &gt; \\dots$, while $x_4-x_2&gt;0$, $x_6-x_4&gt;0$, and so on, so $x_2 &lt; x_4 &lt; x_6 &lt; \\dots$.</p>\n\n<p>This proves (a) and (b).</p>\n\n<p>Now, since $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd, and $x_1&gt;x_3&gt;x_5&gt;\\dots$, it follows that the subsequence of odd terms $\\{x_{2n-1}\\}$ is monotonically decreasing and bounded from below, and hence has a limit (say $L$). Similarly, the subsequence of even terms $\\{x_{2n}\\}$ is monotonically increasing and bounded from above (namely by $\\sqrt{a}$), and so it has a limit as well (say $M$). These limits must satisfy $L\\ge\\sqrt{\\alpha}$ and $M\\le\\sqrt{\\alpha}$. From the equation\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} $$\nif we consider $n$ odd and take limits on both sides, we obtain\n$$ M = \\frac{\\alpha+L}{1+L} $$\nwhile if we consider $n$ even and take limits, we obtain\n$$ L = \\frac{\\alpha+M}{1+M}. $$\nThus, if we define the sequence $\\{y_n\\}$ by $y_1 = L$ and $y_{n+1} = \\frac{\\alpha+y_n}{1+y_n}$, then the sequence $\\{y_n\\}$ is just $\\{L,M,L,M,\\dots\\}$. If $L&gt;\\sqrt{\\alpha}$, then we can actually apply what we proved in part (a) to conclude that $y_1 &gt; y_3 &gt; y_5 &gt; \\dots$, which is impossible since all odd terms are $L$. 
Since $L\\ge\\sqrt{\\alpha}$, it must follow that $L = \\sqrt{\\alpha}$, and you can easily check that this forces $M = \\sqrt{\\alpha}$ as well. Thus, the odd and even subsequential limits are both $\\sqrt{\\alpha}$, so the limit of the sequence is $\\sqrt{\\alpha}$ as well.</p>\n" }, { "answer_id": 2079241, "score": 1, "text": "<p>$$\\text {Let } a_n=(\\sqrt \\alpha)\\tan (d_n+\\pi /4) \\text { with } d_n\\in (0,\\pi /4).$$ Using $\\tan (x +\\pi /4)=(1+\\tan x)/(1-\\tan x)$ when $\\tan x \\ne 1,$ we arrive, after some calculation, that $$\\tan d_{n+1}=k \\tan d_n$$ $$ \\text { where } k=\\frac {1-\\sqrt {\\alpha} }{1+\\sqrt {\\alpha}}.$$ Observe that $0&gt;k&gt;-1$ because $\\alpha &gt;1.$</p>\n\n<p>Remark. I tried this substitution by heuristic analogy with a substitution for Heron's method: If $\\sqrt {\\alpha} \\ne b_1&gt;0$ and $b_{n+1}=(b_n+\\alpha /b_n)/2,$ then $b_n&gt;\\sqrt {\\alpha}$ for $n\\geq 2.$ Let $b_n=\\sqrt {\\alpha} \\coth c_n$ for $n\\geq 2$, with $c_2&gt;0.$ Then $c_n=2^{n-2}c_2$ for $n\\geq 2.$ In this Q I tried $\\tan$ because $\\tan d_n$ alternates $\\pm. $</p>\n" }, { "answer_id": 2079238, "score": 6, "text": "<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} = 1 + \\frac{\\alpha-1}{1+x_n}. $$\nSince $\\alpha-1&gt;0$, we have that if $x_n&lt;\\sqrt{\\alpha}$, then\n$$ x_{n+1} &gt; 1 + \\frac{\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{1+\\sqrt{\\alpha}+\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{\\sqrt{\\alpha}+\\alpha}{1+\\sqrt{\\alpha}} = \\sqrt{\\alpha}$$\nand similarly, if $x_n &gt; \\sqrt{\\alpha}$, then $x_{n+1} &lt; \\sqrt{\\alpha}$. Since $x_1&gt;\\sqrt{\\alpha}$, it follows from induction that $x_n&lt;\\sqrt{\\alpha}$ for $n$ even and $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd. In particular, $x_{n+1}-x_n &gt; 0 $ if $n$ is even, and $x_{n+1}-x_n &lt; 0 $ if $n$ is odd.</p>\n\n<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n}\\implies x_{n+1}(1+x_n) = \\alpha+x_n \\implies x_nx_{n+1} = \\alpha - (x_{n+1}-x_n) $$\nand hence\n$$ x_n(x_{n+1}-x_{n-1}) = (x_n-x_{n-1}) - (x_{n+1}-x_n). $$\nIt is clear that $x_n&gt;0$ for all $n$, so we see that if $n$ is odd, then $x_n-x_{n-1}&gt;0$ and $x_{n+1}-x_n&lt;0$, so $x_{n+1}-x_{n-1} &gt; 0$, while if $n$ is even, then $x_n-x_{n-1}&lt;0$ and $x_{n+1}-x_n&gt;0$, so $x_{n+1}-x_{n-1} &lt;0$.</p>\n\n<p>Thus, $x_3-x_1&lt;0$, $x_5-x_3&lt;0$, and so on, so $x_1 &gt; x_3 &gt; x_5 &gt; \\dots$, while $x_4-x_2&gt;0$, $x_6-x_4&gt;0$, and so on, so $x_2 &lt; x_4 &lt; x_6 &lt; \\dots$.</p>\n\n<p>This proves (a) and (b).</p>\n\n<p>Now, since $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd, and $x_1&gt;x_3&gt;x_5&gt;\\dots$, it follows that the subsequence of odd terms $\\{x_{2n-1}\\}$ is monotonically decreasing and bounded from below, and hence has a limit (say $L$). Similarly, the subsequence of even terms $\\{x_{2n}\\}$ is monotonically increasing and bounded from above (namely by $\\sqrt{a}$), and so it has a limit as well (say $M$). These limits must satisfy $L\\ge\\sqrt{\\alpha}$ and $M\\le\\sqrt{\\alpha}$. From the equation\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} $$\nif we consider $n$ odd and take limits on both sides, we obtain\n$$ M = \\frac{\\alpha+L}{1+L} $$\nwhile if we consider $n$ even and take limits, we obtain\n$$ L = \\frac{\\alpha+M}{1+M}. $$\nThus, if we define the sequence $\\{y_n\\}$ by $y_1 = L$ and $y_{n+1} = \\frac{\\alpha+y_n}{1+y_n}$, then the sequence $\\{y_n\\}$ is just $\\{L,M,L,M,\\dots\\}$. 
If $L&gt;\\sqrt{\\alpha}$, then we can actually apply what we proved in part (a) to conclude that $y_1 &gt; y_3 &gt; y_5 &gt; \\dots$, which is impossible since all odd terms are $L$. Since $L\\ge\\sqrt{\\alpha}$, it must follow that $L = \\sqrt{\\alpha}$, and you can easily check that this forces $M = \\sqrt{\\alpha}$ as well. Thus, the odd and even subsequential limits are both $\\sqrt{\\alpha}$, so the limit of the sequence is $\\sqrt{\\alpha}$ as well.</p>\n" }, { "answer_id": 2079241, "score": 1, "text": "<p>$$\\text {Let } a_n=(\\sqrt \\alpha)\\tan (d_n+\\pi /4) \\text { with } d_n\\in (0,\\pi /4).$$ Using $\\tan (x +\\pi /4)=(1+\\tan x)/(1-\\tan x)$ when $\\tan x \\ne 1,$ we arrive, after some calculation, that $$\\tan d_{n+1}=k \\tan d_n$$ $$ \\text { where } k=\\frac {1-\\sqrt {\\alpha} }{1+\\sqrt {\\alpha}}.$$ Observe that $0&gt;k&gt;-1$ because $\\alpha &gt;1.$</p>\n\n<p>Remark. I tried this substitution by heuristic analogy with a substitution for Heron's method: If $\\sqrt {\\alpha} \\ne b_1&gt;0$ and $b_{n+1}=(b_n+\\alpha /b_n)/2,$ then $b_n&gt;\\sqrt {\\alpha}$ for $n\\geq 2.$ Let $b_n=\\sqrt {\\alpha} \\coth c_n$ for $n\\geq 2$, with $c_2&gt;0.$ Then $c_n=2^{n-2}c_2$ for $n\\geq 2.$ In this Q I tried $\\tan$ because $\\tan d_n$ alternates $\\pm. $</p>\n" }, { "answer_id": 2079238, "score": 6, "text": "<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} = 1 + \\frac{\\alpha-1}{1+x_n}. $$\nSince $\\alpha-1&gt;0$, we have that if $x_n&lt;\\sqrt{\\alpha}$, then\n$$ x_{n+1} &gt; 1 + \\frac{\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{1+\\sqrt{\\alpha}+\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{\\sqrt{\\alpha}+\\alpha}{1+\\sqrt{\\alpha}} = \\sqrt{\\alpha}$$\nand similarly, if $x_n &gt; \\sqrt{\\alpha}$, then $x_{n+1} &lt; \\sqrt{\\alpha}$. Since $x_1&gt;\\sqrt{\\alpha}$, it follows from induction that $x_n&lt;\\sqrt{\\alpha}$ for $n$ even and $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd. In particular, $x_{n+1}-x_n &gt; 0 $ if $n$ is even, and $x_{n+1}-x_n &lt; 0 $ if $n$ is odd.</p>\n\n<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n}\\implies x_{n+1}(1+x_n) = \\alpha+x_n \\implies x_nx_{n+1} = \\alpha - (x_{n+1}-x_n) $$\nand hence\n$$ x_n(x_{n+1}-x_{n-1}) = (x_n-x_{n-1}) - (x_{n+1}-x_n). $$\nIt is clear that $x_n&gt;0$ for all $n$, so we see that if $n$ is odd, then $x_n-x_{n-1}&gt;0$ and $x_{n+1}-x_n&lt;0$, so $x_{n+1}-x_{n-1} &gt; 0$, while if $n$ is even, then $x_n-x_{n-1}&lt;0$ and $x_{n+1}-x_n&gt;0$, so $x_{n+1}-x_{n-1} &lt;0$.</p>\n\n<p>Thus, $x_3-x_1&lt;0$, $x_5-x_3&lt;0$, and so on, so $x_1 &gt; x_3 &gt; x_5 &gt; \\dots$, while $x_4-x_2&gt;0$, $x_6-x_4&gt;0$, and so on, so $x_2 &lt; x_4 &lt; x_6 &lt; \\dots$.</p>\n\n<p>This proves (a) and (b).</p>\n\n<p>Now, since $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd, and $x_1&gt;x_3&gt;x_5&gt;\\dots$, it follows that the subsequence of odd terms $\\{x_{2n-1}\\}$ is monotonically decreasing and bounded from below, and hence has a limit (say $L$). Similarly, the subsequence of even terms $\\{x_{2n}\\}$ is monotonically increasing and bounded from above (namely by $\\sqrt{a}$), and so it has a limit as well (say $M$). These limits must satisfy $L\\ge\\sqrt{\\alpha}$ and $M\\le\\sqrt{\\alpha}$. From the equation\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} $$\nif we consider $n$ odd and take limits on both sides, we obtain\n$$ M = \\frac{\\alpha+L}{1+L} $$\nwhile if we consider $n$ even and take limits, we obtain\n$$ L = \\frac{\\alpha+M}{1+M}. 
$$\nThus, if we define the sequence $\\{y_n\\}$ by $y_1 = L$ and $y_{n+1} = \\frac{\\alpha+y_n}{1+y_n}$, then the sequence $\\{y_n\\}$ is just $\\{L,M,L,M,\\dots\\}$. If $L&gt;\\sqrt{\\alpha}$, then we can actually apply what we proved in part (a) to conclude that $y_1 &gt; y_3 &gt; y_5 &gt; \\dots$, which is impossible since all odd terms are $L$. Since $L\\ge\\sqrt{\\alpha}$, it must follow that $L = \\sqrt{\\alpha}$, and you can easily check that this forces $M = \\sqrt{\\alpha}$ as well. Thus, the odd and even subsequential limits are both $\\sqrt{\\alpha}$, so the limit of the sequence is $\\sqrt{\\alpha}$ as well.</p>\n" }, { "answer_id": 2079241, "score": 1, "text": "<p>$$\\text {Let } a_n=(\\sqrt \\alpha)\\tan (d_n+\\pi /4) \\text { with } d_n\\in (0,\\pi /4).$$ Using $\\tan (x +\\pi /4)=(1+\\tan x)/(1-\\tan x)$ when $\\tan x \\ne 1,$ we arrive, after some calculation, that $$\\tan d_{n+1}=k \\tan d_n$$ $$ \\text { where } k=\\frac {1-\\sqrt {\\alpha} }{1+\\sqrt {\\alpha}}.$$ Observe that $0&gt;k&gt;-1$ because $\\alpha &gt;1.$</p>\n\n<p>Remark. I tried this substitution by heuristic analogy with a substitution for Heron's method: If $\\sqrt {\\alpha} \\ne b_1&gt;0$ and $b_{n+1}=(b_n+\\alpha /b_n)/2,$ then $b_n&gt;\\sqrt {\\alpha}$ for $n\\geq 2.$ Let $b_n=\\sqrt {\\alpha} \\coth c_n$ for $n\\geq 2$, with $c_2&gt;0.$ Then $c_n=2^{n-2}c_2$ for $n\\geq 2.$ In this Q I tried $\\tan$ because $\\tan d_n$ alternates $\\pm. $</p>\n" }, { "answer_id": 2079238, "score": 6, "text": "<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} = 1 + \\frac{\\alpha-1}{1+x_n}. $$\nSince $\\alpha-1&gt;0$, we have that if $x_n&lt;\\sqrt{\\alpha}$, then\n$$ x_{n+1} &gt; 1 + \\frac{\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{1+\\sqrt{\\alpha}+\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{\\sqrt{\\alpha}+\\alpha}{1+\\sqrt{\\alpha}} = \\sqrt{\\alpha}$$\nand similarly, if $x_n &gt; \\sqrt{\\alpha}$, then $x_{n+1} &lt; \\sqrt{\\alpha}$. Since $x_1&gt;\\sqrt{\\alpha}$, it follows from induction that $x_n&lt;\\sqrt{\\alpha}$ for $n$ even and $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd. In particular, $x_{n+1}-x_n &gt; 0 $ if $n$ is even, and $x_{n+1}-x_n &lt; 0 $ if $n$ is odd.</p>\n\n<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n}\\implies x_{n+1}(1+x_n) = \\alpha+x_n \\implies x_nx_{n+1} = \\alpha - (x_{n+1}-x_n) $$\nand hence\n$$ x_n(x_{n+1}-x_{n-1}) = (x_n-x_{n-1}) - (x_{n+1}-x_n). $$\nIt is clear that $x_n&gt;0$ for all $n$, so we see that if $n$ is odd, then $x_n-x_{n-1}&gt;0$ and $x_{n+1}-x_n&lt;0$, so $x_{n+1}-x_{n-1} &gt; 0$, while if $n$ is even, then $x_n-x_{n-1}&lt;0$ and $x_{n+1}-x_n&gt;0$, so $x_{n+1}-x_{n-1} &lt;0$.</p>\n\n<p>Thus, $x_3-x_1&lt;0$, $x_5-x_3&lt;0$, and so on, so $x_1 &gt; x_3 &gt; x_5 &gt; \\dots$, while $x_4-x_2&gt;0$, $x_6-x_4&gt;0$, and so on, so $x_2 &lt; x_4 &lt; x_6 &lt; \\dots$.</p>\n\n<p>This proves (a) and (b).</p>\n\n<p>Now, since $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd, and $x_1&gt;x_3&gt;x_5&gt;\\dots$, it follows that the subsequence of odd terms $\\{x_{2n-1}\\}$ is monotonically decreasing and bounded from below, and hence has a limit (say $L$). Similarly, the subsequence of even terms $\\{x_{2n}\\}$ is monotonically increasing and bounded from above (namely by $\\sqrt{a}$), and so it has a limit as well (say $M$). These limits must satisfy $L\\ge\\sqrt{\\alpha}$ and $M\\le\\sqrt{\\alpha}$. 
From the equation\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} $$\nif we consider $n$ odd and take limits on both sides, we obtain\n$$ M = \\frac{\\alpha+L}{1+L} $$\nwhile if we consider $n$ even and take limits, we obtain\n$$ L = \\frac{\\alpha+M}{1+M}. $$\nThus, if we define the sequence $\\{y_n\\}$ by $y_1 = L$ and $y_{n+1} = \\frac{\\alpha+y_n}{1+y_n}$, then the sequence $\\{y_n\\}$ is just $\\{L,M,L,M,\\dots\\}$. If $L&gt;\\sqrt{\\alpha}$, then we can actually apply what we proved in part (a) to conclude that $y_1 &gt; y_3 &gt; y_5 &gt; \\dots$, which is impossible since all odd terms are $L$. Since $L\\ge\\sqrt{\\alpha}$, it must follow that $L = \\sqrt{\\alpha}$, and you can easily check that this forces $M = \\sqrt{\\alpha}$ as well. Thus, the odd and even subsequential limits are both $\\sqrt{\\alpha}$, so the limit of the sequence is $\\sqrt{\\alpha}$ as well.</p>\n" }, { "answer_id": 2079241, "score": 1, "text": "<p>$$\\text {Let } a_n=(\\sqrt \\alpha)\\tan (d_n+\\pi /4) \\text { with } d_n\\in (0,\\pi /4).$$ Using $\\tan (x +\\pi /4)=(1+\\tan x)/(1-\\tan x)$ when $\\tan x \\ne 1,$ we arrive, after some calculation, that $$\\tan d_{n+1}=k \\tan d_n$$ $$ \\text { where } k=\\frac {1-\\sqrt {\\alpha} }{1+\\sqrt {\\alpha}}.$$ Observe that $0&gt;k&gt;-1$ because $\\alpha &gt;1.$</p>\n\n<p>Remark. I tried this substitution by heuristic analogy with a substitution for Heron's method: If $\\sqrt {\\alpha} \\ne b_1&gt;0$ and $b_{n+1}=(b_n+\\alpha /b_n)/2,$ then $b_n&gt;\\sqrt {\\alpha}$ for $n\\geq 2.$ Let $b_n=\\sqrt {\\alpha} \\coth c_n$ for $n\\geq 2$, with $c_2&gt;0.$ Then $c_n=2^{n-2}c_2$ for $n\\geq 2.$ In this Q I tried $\\tan$ because $\\tan d_n$ alternates $\\pm. $</p>\n" }, { "answer_id": 2079238, "score": 6, "text": "<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} = 1 + \\frac{\\alpha-1}{1+x_n}. $$\nSince $\\alpha-1&gt;0$, we have that if $x_n&lt;\\sqrt{\\alpha}$, then\n$$ x_{n+1} &gt; 1 + \\frac{\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{1+\\sqrt{\\alpha}+\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{\\sqrt{\\alpha}+\\alpha}{1+\\sqrt{\\alpha}} = \\sqrt{\\alpha}$$\nand similarly, if $x_n &gt; \\sqrt{\\alpha}$, then $x_{n+1} &lt; \\sqrt{\\alpha}$. Since $x_1&gt;\\sqrt{\\alpha}$, it follows from induction that $x_n&lt;\\sqrt{\\alpha}$ for $n$ even and $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd. In particular, $x_{n+1}-x_n &gt; 0 $ if $n$ is even, and $x_{n+1}-x_n &lt; 0 $ if $n$ is odd.</p>\n\n<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n}\\implies x_{n+1}(1+x_n) = \\alpha+x_n \\implies x_nx_{n+1} = \\alpha - (x_{n+1}-x_n) $$\nand hence\n$$ x_n(x_{n+1}-x_{n-1}) = (x_n-x_{n-1}) - (x_{n+1}-x_n). $$\nIt is clear that $x_n&gt;0$ for all $n$, so we see that if $n$ is odd, then $x_n-x_{n-1}&gt;0$ and $x_{n+1}-x_n&lt;0$, so $x_{n+1}-x_{n-1} &gt; 0$, while if $n$ is even, then $x_n-x_{n-1}&lt;0$ and $x_{n+1}-x_n&gt;0$, so $x_{n+1}-x_{n-1} &lt;0$.</p>\n\n<p>Thus, $x_3-x_1&lt;0$, $x_5-x_3&lt;0$, and so on, so $x_1 &gt; x_3 &gt; x_5 &gt; \\dots$, while $x_4-x_2&gt;0$, $x_6-x_4&gt;0$, and so on, so $x_2 &lt; x_4 &lt; x_6 &lt; \\dots$.</p>\n\n<p>This proves (a) and (b).</p>\n\n<p>Now, since $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd, and $x_1&gt;x_3&gt;x_5&gt;\\dots$, it follows that the subsequence of odd terms $\\{x_{2n-1}\\}$ is monotonically decreasing and bounded from below, and hence has a limit (say $L$). Similarly, the subsequence of even terms $\\{x_{2n}\\}$ is monotonically increasing and bounded from above (namely by $\\sqrt{a}$), and so it has a limit as well (say $M$). 
These limits must satisfy $L\\ge\\sqrt{\\alpha}$ and $M\\le\\sqrt{\\alpha}$. From the equation\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} $$\nif we consider $n$ odd and take limits on both sides, we obtain\n$$ M = \\frac{\\alpha+L}{1+L} $$\nwhile if we consider $n$ even and take limits, we obtain\n$$ L = \\frac{\\alpha+M}{1+M}. $$\nThus, if we define the sequence $\\{y_n\\}$ by $y_1 = L$ and $y_{n+1} = \\frac{\\alpha+y_n}{1+y_n}$, then the sequence $\\{y_n\\}$ is just $\\{L,M,L,M,\\dots\\}$. If $L&gt;\\sqrt{\\alpha}$, then we can actually apply what we proved in part (a) to conclude that $y_1 &gt; y_3 &gt; y_5 &gt; \\dots$, which is impossible since all odd terms are $L$. Since $L\\ge\\sqrt{\\alpha}$, it must follow that $L = \\sqrt{\\alpha}$, and you can easily check that this forces $M = \\sqrt{\\alpha}$ as well. Thus, the odd and even subsequential limits are both $\\sqrt{\\alpha}$, so the limit of the sequence is $\\sqrt{\\alpha}$ as well.</p>\n" }, { "answer_id": 2079241, "score": 1, "text": "<p>$$\\text {Let } a_n=(\\sqrt \\alpha)\\tan (d_n+\\pi /4) \\text { with } d_n\\in (0,\\pi /4).$$ Using $\\tan (x +\\pi /4)=(1+\\tan x)/(1-\\tan x)$ when $\\tan x \\ne 1,$ we arrive, after some calculation, that $$\\tan d_{n+1}=k \\tan d_n$$ $$ \\text { where } k=\\frac {1-\\sqrt {\\alpha} }{1+\\sqrt {\\alpha}}.$$ Observe that $0&gt;k&gt;-1$ because $\\alpha &gt;1.$</p>\n\n<p>Remark. I tried this substitution by heuristic analogy with a substitution for Heron's method: If $\\sqrt {\\alpha} \\ne b_1&gt;0$ and $b_{n+1}=(b_n+\\alpha /b_n)/2,$ then $b_n&gt;\\sqrt {\\alpha}$ for $n\\geq 2.$ Let $b_n=\\sqrt {\\alpha} \\coth c_n$ for $n\\geq 2$, with $c_2&gt;0.$ Then $c_n=2^{n-2}c_2$ for $n\\geq 2.$ In this Q I tried $\\tan$ because $\\tan d_n$ alternates $\\pm. $</p>\n" }, { "answer_id": 2079238, "score": 6, "text": "<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} = 1 + \\frac{\\alpha-1}{1+x_n}. $$\nSince $\\alpha-1&gt;0$, we have that if $x_n&lt;\\sqrt{\\alpha}$, then\n$$ x_{n+1} &gt; 1 + \\frac{\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{1+\\sqrt{\\alpha}+\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{\\sqrt{\\alpha}+\\alpha}{1+\\sqrt{\\alpha}} = \\sqrt{\\alpha}$$\nand similarly, if $x_n &gt; \\sqrt{\\alpha}$, then $x_{n+1} &lt; \\sqrt{\\alpha}$. Since $x_1&gt;\\sqrt{\\alpha}$, it follows from induction that $x_n&lt;\\sqrt{\\alpha}$ for $n$ even and $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd. In particular, $x_{n+1}-x_n &gt; 0 $ if $n$ is even, and $x_{n+1}-x_n &lt; 0 $ if $n$ is odd.</p>\n\n<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n}\\implies x_{n+1}(1+x_n) = \\alpha+x_n \\implies x_nx_{n+1} = \\alpha - (x_{n+1}-x_n) $$\nand hence\n$$ x_n(x_{n+1}-x_{n-1}) = (x_n-x_{n-1}) - (x_{n+1}-x_n). $$\nIt is clear that $x_n&gt;0$ for all $n$, so we see that if $n$ is odd, then $x_n-x_{n-1}&gt;0$ and $x_{n+1}-x_n&lt;0$, so $x_{n+1}-x_{n-1} &gt; 0$, while if $n$ is even, then $x_n-x_{n-1}&lt;0$ and $x_{n+1}-x_n&gt;0$, so $x_{n+1}-x_{n-1} &lt;0$.</p>\n\n<p>Thus, $x_3-x_1&lt;0$, $x_5-x_3&lt;0$, and so on, so $x_1 &gt; x_3 &gt; x_5 &gt; \\dots$, while $x_4-x_2&gt;0$, $x_6-x_4&gt;0$, and so on, so $x_2 &lt; x_4 &lt; x_6 &lt; \\dots$.</p>\n\n<p>This proves (a) and (b).</p>\n\n<p>Now, since $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd, and $x_1&gt;x_3&gt;x_5&gt;\\dots$, it follows that the subsequence of odd terms $\\{x_{2n-1}\\}$ is monotonically decreasing and bounded from below, and hence has a limit (say $L$). 
Similarly, the subsequence of even terms $\\{x_{2n}\\}$ is monotonically increasing and bounded from above (namely by $\\sqrt{a}$), and so it has a limit as well (say $M$). These limits must satisfy $L\\ge\\sqrt{\\alpha}$ and $M\\le\\sqrt{\\alpha}$. From the equation\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} $$\nif we consider $n$ odd and take limits on both sides, we obtain\n$$ M = \\frac{\\alpha+L}{1+L} $$\nwhile if we consider $n$ even and take limits, we obtain\n$$ L = \\frac{\\alpha+M}{1+M}. $$\nThus, if we define the sequence $\\{y_n\\}$ by $y_1 = L$ and $y_{n+1} = \\frac{\\alpha+y_n}{1+y_n}$, then the sequence $\\{y_n\\}$ is just $\\{L,M,L,M,\\dots\\}$. If $L&gt;\\sqrt{\\alpha}$, then we can actually apply what we proved in part (a) to conclude that $y_1 &gt; y_3 &gt; y_5 &gt; \\dots$, which is impossible since all odd terms are $L$. Since $L\\ge\\sqrt{\\alpha}$, it must follow that $L = \\sqrt{\\alpha}$, and you can easily check that this forces $M = \\sqrt{\\alpha}$ as well. Thus, the odd and even subsequential limits are both $\\sqrt{\\alpha}$, so the limit of the sequence is $\\sqrt{\\alpha}$ as well.</p>\n" }, { "answer_id": 2079241, "score": 1, "text": "<p>$$\\text {Let } a_n=(\\sqrt \\alpha)\\tan (d_n+\\pi /4) \\text { with } d_n\\in (0,\\pi /4).$$ Using $\\tan (x +\\pi /4)=(1+\\tan x)/(1-\\tan x)$ when $\\tan x \\ne 1,$ we arrive, after some calculation, that $$\\tan d_{n+1}=k \\tan d_n$$ $$ \\text { where } k=\\frac {1-\\sqrt {\\alpha} }{1+\\sqrt {\\alpha}}.$$ Observe that $0&gt;k&gt;-1$ because $\\alpha &gt;1.$</p>\n\n<p>Remark. I tried this substitution by heuristic analogy with a substitution for Heron's method: If $\\sqrt {\\alpha} \\ne b_1&gt;0$ and $b_{n+1}=(b_n+\\alpha /b_n)/2,$ then $b_n&gt;\\sqrt {\\alpha}$ for $n\\geq 2.$ Let $b_n=\\sqrt {\\alpha} \\coth c_n$ for $n\\geq 2$, with $c_2&gt;0.$ Then $c_n=2^{n-2}c_2$ for $n\\geq 2.$ In this Q I tried $\\tan$ because $\\tan d_n$ alternates $\\pm. $</p>\n" }, { "answer_id": 2079238, "score": 6, "text": "<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n} = 1 + \\frac{\\alpha-1}{1+x_n}. $$\nSince $\\alpha-1&gt;0$, we have that if $x_n&lt;\\sqrt{\\alpha}$, then\n$$ x_{n+1} &gt; 1 + \\frac{\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{1+\\sqrt{\\alpha}+\\alpha-1}{1+\\sqrt{\\alpha}} = \\frac{\\sqrt{\\alpha}+\\alpha}{1+\\sqrt{\\alpha}} = \\sqrt{\\alpha}$$\nand similarly, if $x_n &gt; \\sqrt{\\alpha}$, then $x_{n+1} &lt; \\sqrt{\\alpha}$. Since $x_1&gt;\\sqrt{\\alpha}$, it follows from induction that $x_n&lt;\\sqrt{\\alpha}$ for $n$ even and $x_n&gt;\\sqrt{\\alpha}$ for $n$ odd. In particular, $x_{n+1}-x_n &gt; 0 $ if $n$ is even, and $x_{n+1}-x_n &lt; 0 $ if $n$ is odd.</p>\n\n<p>Notice that\n$$ x_{n+1} = \\frac{\\alpha+x_n}{1+x_n}\\implies x_{n+1}(1+x_n) = \\alpha+x_n \\implies x_nx_{n+1} = \\alpha - (x_{n+1}-x_n) $$\nand hence\n$$ x_n(x_{n+1}-x_{n-1}) = (x_n-x_{n-1}) - (x_{n+1}-x_n). 
Question: Prob. 17, Chap. 3 in Baby Rudin: For $\alpha &gt; 1$, how to obtain these inequalities from this recurrence relation? Answer 1: <p>Notice that $$ x_{n+1} = \frac{\alpha+x_n}{1+x_n} = 1 + \frac{\alpha-1}{1+x_n}. $$ Since $\alpha-1&gt;0$, we have that if $x_n&lt;\sqrt{\alpha}$, then $$ x_{n+1} &gt; 1 + \frac{\alpha-1}{1+\sqrt{\alpha}} = \frac{1+\sqrt{\alpha}+\alpha-1}{1+\sqrt{\alpha}} = \frac{\sqrt{\alpha}+\alpha}{1+\sqrt{\alpha}} = \sqrt{\alpha}$$ and similarly, if $x_n &gt; \sqrt{\alpha}$, then $x_{n+1} &lt; \sqrt{\alpha}$. Since $x_1&gt;\sqrt{\alpha}$, it follows from induction that $x_n&lt;\sqrt{\alpha}$ for $n$ even and $x_n&gt;\sqrt{\alpha}$ for $n$ odd. In particular, $x_{n+1}-x_n &gt; 0 $ if $n$ is even, and $x_{n+1}-x_n &lt; 0 $ if $n$ is odd.</p> <p>Notice that $$ x_{n+1} = \frac{\alpha+x_n}{1+x_n}\implies x_{n+1}(1+x_n) = \alpha+x_n \implies x_nx_{n+1} = \alpha - (x_{n+1}-x_n) $$ and hence $$ x_n(x_{n+1}-x_{n-1}) = (x_n-x_{n-1}) - (x_{n+1}-x_n). $$ It is clear that $x_n&gt;0$ for all $n$, so we see that if $n$ is odd, then $x_n-x_{n-1}&gt;0$ and $x_{n+1}-x_n&lt;0$, so $x_{n+1}-x_{n-1} &gt; 0$, while if $n$ is even, then $x_n-x_{n-1}&lt;0$ and $x_{n+1}-x_n&gt;0$, so $x_{n+1}-x_{n-1} &lt;0$.</p> <p>Thus, $x_3-x_1&lt;0$, $x_5-x_3&lt;0$, and so on, so $x_1 &gt; x_3 &gt; x_5 &gt; \dots$, while $x_4-x_2&gt;0$, $x_6-x_4&gt;0$, and so on, so $x_2 &lt; x_4 &lt; x_6 &lt; \dots$.</p> <p>This proves (a) and (b).</p> <p>Now, since $x_n&gt;\sqrt{\alpha}$ for $n$ odd, and $x_1&gt;x_3&gt;x_5&gt;\dots$, it follows that the subsequence of odd terms $\{x_{2n-1}\}$ is monotonically decreasing and bounded from below, and hence has a limit (say $L$). Similarly, the subsequence of even terms $\{x_{2n}\}$ is monotonically increasing and bounded from above (namely by $\sqrt{\alpha}$), and so it has a limit as well (say $M$). These limits must satisfy $L\ge\sqrt{\alpha}$ and $M\le\sqrt{\alpha}$. From the equation $$ x_{n+1} = \frac{\alpha+x_n}{1+x_n} $$ if we consider $n$ odd and take limits on both sides, we obtain $$ M = \frac{\alpha+L}{1+L} $$ while if we consider $n$ even and take limits, we obtain $$ L = \frac{\alpha+M}{1+M}. $$ Thus, if we define the sequence $\{y_n\}$ by $y_1 = L$ and $y_{n+1} = \frac{\alpha+y_n}{1+y_n}$, then the sequence $\{y_n\}$ is just $\{L,M,L,M,\dots\}$. If $L&gt;\sqrt{\alpha}$, then we can apply what we proved in part (a) to conclude that $y_1 &gt; y_3 &gt; y_5 &gt; \dots$, which is impossible since all odd terms are $L$. Since $L\ge\sqrt{\alpha}$, it must follow that $L = \sqrt{\alpha}$, and you can easily check that this forces $M = \sqrt{\alpha}$ as well. Thus, the odd and even subsequential limits are both $\sqrt{\alpha}$, so the limit of the sequence is $\sqrt{\alpha}$ as well.</p> Answer 2: <p>$$\text {Let } a_n=(\sqrt \alpha)\tan (d_n+\pi /4) \text { with } d_n\in (0,\pi /4).$$ Using $\tan (x +\pi /4)=(1+\tan x)/(1-\tan x)$ when $\tan x \ne 1,$ we arrive, after some calculation, at $$\tan d_{n+1}=k \tan d_n$$ $$ \text { where } k=\frac {1-\sqrt {\alpha} }{1+\sqrt {\alpha}}.$$ Observe that $0&gt;k&gt;-1$ because $\alpha &gt;1.$</p> <p>Remark. I tried this substitution by heuristic analogy with a substitution for Heron's method: If $\sqrt {\alpha} \ne b_1&gt;0$ and $b_{n+1}=(b_n+\alpha /b_n)/2,$ then $b_n&gt;\sqrt {\alpha}$ for $n\geq 2.$ Let $b_n=\sqrt {\alpha} \coth c_n$ for $n\geq 2$, with $c_2&gt;0.$ Then $c_n=2^{n-2}c_2$ for $n\geq 2.$ In this Q I tried $\tan$ because $\tan d_n$ alternates $\pm. $</p>
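Editorial note (an addition by the editor, not part of the original posts): the two limit equations in Answer 1 can also be solved directly, without the auxiliary sequence $\{y_n\}$. Clearing denominators in $M = \frac{\alpha+L}{1+L}$ and $L = \frac{\alpha+M}{1+M}$ gives $$ M + LM = \alpha + L, \qquad L + LM = \alpha + M, $$ and subtracting the second equation from the first yields $M - L = L - M$, hence $L = M$. Substituting back, $L(1+L) = \alpha + L$, i.e. $L^2 = \alpha$, and since every term of the sequence is positive the positive root must be taken, so $L = M = \sqrt{\alpha}$. As a concrete illustration (with hypothetical values chosen by the editor), take $\alpha = 3$ and $x_1 = 2$: then $x_2 = 5/3 \approx 1.667$, $x_3 = 7/4 = 1.75$, $x_4 = 19/11 \approx 1.727$, and $x_5 = 26/15 \approx 1.733$, oscillating about $\sqrt{3} \approx 1.732$ exactly as parts (a) and (b) predict.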
Writing the equation of a perpendicular bisector
2,079,324
6
https://math.stackexchange.com/questions/2079324/writing-the-equation-of-a-perpendicular-bisector
<p>Write the equation of the perpendicular bisector of the line segment between the points $(1,-2)$ and $( -1,-2)$.</p> <p>What I have worked out so far:</p> <p>The first part is $m = \dfrac{y_2-y_1}{x_2-x_1}$</p> <p>$$m = \frac{-2-(-2)}{-1-1} = \frac{0}{-2}$$</p> <p>$$m_{perpendicular} = \frac{2}{0}$$ </p> <p>Is $\frac{2}{0}$ correct? (But I can't divide by zero??) Where do I go from here?</p>
[ { "answer_id": 2079340, "score": 5, "text": "<p>While solving problems on coordinate geometry, it often helps to draw the diagram. If you draw the diagram, you'll notice that the perpendicular bisector of the given line is the y-axis, i.e. $x = 0$.</p>\n\n<p>For a rigorous solution, see the following:</p>\n\n<p>We use the section formula to find out the mid-point of the line segment joining $(1, -2)$ and $(-1, -2)$. Thus the midpoint is $\\left ( \\frac{1 + (-1)}{2}, \\frac{-2 + (-2)}{2} \\right ) = (0, -2)$. Now this line segment has a slope of $\\tan 0 = 0$. So it's perpendicular bisector must have a slope of $\\tan 90°$ meaning it is vertical. So, write the equation of the line as:</p>\n\n<p>$$\n(x - 0) = (\\cot 90°)(y - (-2)) \\\\\n\\implies x = 0\n$$</p>\n\n<p>Done.</p>\n" }, { "answer_id": 2079662, "score": 3, "text": "<p>I'll approach this as a more general problem,\nusing techniques that are probably beyond what this question was\nexpecting the solution to use. Consider this an answer for future reference.</p>\n\n<hr>\n\n<p><strong>Problem:</strong></p>\n\n<p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the\nperpendicular bisector of the segment $AB.$</p>\n\n<p><strong>Solution:</strong></p>\n\n<p>Define vectors \n$\\newcommand{\\a}{\\mathbf a}\\a$ and $\\newcommand{\\b}{\\mathbf b}\\b$ \nequal to the displacements of $A$ and $B$ from the origin:\n$$\n\\a = \\begin{pmatrix} x_A \\\\ y_A \\end{pmatrix}, \\qquad\n\\b = \\begin{pmatrix} x_B \\\\ y_B \\end{pmatrix}.\n$$\nNow let $\\newcommand{\\v}{\\mathbf v}\\v = \\b - \\a.$\nThen the equation \n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} = c, \\tag1\n$$\nwhere $\\v\\cdot\\mathbf u$ is the inner product (vector \"dot\" product)\nof $\\v$ and $\\mathbf u$,\nis the equation of a line perpendicular to $\\v,$\nand therefore perpendicular to the segment $AB.$\nThe constant $c$ determines which member of that family of lines\nthe equation describes.</p>\n\n<p>We want a line perpendicular to $AB$ that passes through the midpoint\nof the segment $AB,$ that is, the point \n$\\newcommand{\\xC}{\\frac{x_A+x_B}{2}}\\newcommand{\\yC}{\\frac{y_A+y_B}{2}}\n\\left(\\xC, \\yC\\right).$\nIn order for this point to be on the line described by Equation $1,$\nit must be true that\n$$\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix} = c.\n$$\nWe can use this fact to substitute for $c$ in Equation $1,$ with the result\n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} =\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix}.\n$$\nThis is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a \"simpler\"\nform of the equation is desired. 
</p>\n\n<p>Such a form can be obtained using the fact that\n$$\n\\v = \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}.\n$$</p>\n\n<p>Making this substitution for $\\v,$ multiplying term-by-term to evaluate\nthe inner product, and simplifying algebraically, we have\n\\begin{align}\n\\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix}\n &amp;= \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix},\\\\\n(x_B - x_A)x + (y_B - y_A)y\n &amp;= (x_B - x_A)\\left(\\xC\\right) + (y_B - y_A)\\left(\\yC\\right),\\\\\n(x_B - x_A)x + (y_B - y_A)y &amp;= \\frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2).\n\\end{align}\nFor any given points $A$ and $B$ we can then substitute the given values\nof $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the\nequation of a line in the simple format $px + qy = k$ for constants\n$p,$ $q,$ and $k.$\nThe advantage of this approach over most others is that it has no special\ncases to watch out for (such as when the segment $AB$ is horizontal\nor vertical); it works exactly the same for every pair of points\n$A$ and $B,$ requiring only that they be distinct points.\nBut if you must have the equation in a different format\n(such as, \"$y=mx+b$ unless the line is vertical, in which case write $x=k$\"),\nit is easy to convert the equation above into the desired format.</p>\n\n<p>In the particular instance given in the question,\n$x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$\nand the equation of the line simplifies to\n$$\n-2x + 0y = 0,\n$$\nor even more simply,\n$$\nx = 0.\n$$</p>\n" }, { "answer_id": 2079331, "score": 2, "text": "<p>The line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$, and the perpendicular bisector of these two points is the $y$-axis, i.e. $x=0$.</p>\n" }, { "answer_id": 2079717, "score": 2, "text": "<p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to the x-axis. So, the perpendicular bisector is parallel to the y-axis $\\implies x=c$.</p>\n\n<p>And it passes through the midpoint $(0,-2)$.</p>\n\n<p>Hence the equation of the perpendicular bisector is $x=0$.</p>\n" }, { "answer_id": 2079328, "score": 1, "text": "<p>Since you know the coordinates of two points on the line, $(-1,-2)$ and $(1,-2)$, assume that the equation of the line is of the form $y=mx+c$.</p>\n\n<p>Now, putting $(x,y)=(1,-2),(-1,-2)$ will give you the value of $m$ as $m=\\frac{-2-(-2)}{-1-1}=0$.</p>\n\n<p>Let $m'$ be the slope of the perpendicular bisector; then $m\\times m'=-1$, so here $m'$ is not defined (the bisector is vertical).</p>\n\n<p>Finding the coordinates of the midpoint of the segment gives $x=\\frac{1-1}{2}=0$ and $y=\\frac{-2-2}{2}=-2$, or $(x,y)=(0,-2)$.</p>\n\n<p>Now, you have the slope of the perpendicular and also the coordinates of one of its points.</p>\n\n<p>So, $\\frac{y-y_1}{x-x_1}=m'\\implies \\frac{y+2}{x-0}=$ not defined $\\implies x=0$.</p>\n\n<p>So, the equation of the perpendicular bisector is $x=0$; that is, the perpendicular bisector of the given segment is the $y$-axis.</p>\n\n<p><a href=\"https://www.desmos.com/calculator/2u6xyye8mv\" rel=\"nofollow noreferrer\">Here is a graph of your problem:</a></p>\n" }, { "answer_id": 2079550, "score": 1, "text": "<p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p>\n\n<p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p>\n\n<p>$$ \\iff x = 0$$</p>\n" }, { "answer_id": 2079431, "score": 0, "text": "<p>To find the required equation, follow the procedure mentioned below.</p>\n\n<p><em>1.
Find the mid-point of the two given points.</em></p>\n\n<p><em>2. The mid-point is (0,-2). The given line is parallel to the x-axis; therefore, any line perpendicular to it will be of the form x = k, where k is a constant. This is because the abscissa must be constant for a line to be parallel to the y-axis and hence perpendicular to the x-axis.</em></p>\n\n<p><em>3. In this case, k = 0.</em></p>\n\n<p>Therefore, <strong>x = 0</strong> is the required equation.</p>\n" }, { "answer_id": 2079678, "score": 0, "text": "<p>Your approach of linking to the slope is incorrect.</p>\n\n<p>The equal-distance property is to be used.</p>\n\n<p>$$ \\sqrt{(x-1)^2 + (y+2)^2} = \\sqrt{(x+1)^2 + (y+2)^2} $$</p>\n\n<p>Simplifying, you get $x=0$, i.e. the $y$-axis.</p>\n\n<p>To verify it, note that between the two points only the sign of the $x$-coordinate has changed. So the locus is halfway in between, that is, the $y$-axis.</p>\n" }, { "answer_id": 2080008, "score": 0, "text": "<p>A couple of important details:</p>\n<ol>\n<li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendicular bisector of the <strong>segment</strong> that has endpoints at <span class=\"math-container\">$(1,-2)$</span> and <span class=\"math-container\">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li>\n<li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li>\n</ol>\n<p>So now let's figure out what the segment going from <span class=\"math-container\">$(1,-2)$</span> to <span class=\"math-container\">$(-1,-2)$</span> actually looks like. The two points have the same <span class=\"math-container\">$y$</span>-coordinate, which means the segment is horizontal; see below.\n<a href=\"https://i.stack.imgur.com/ujNFY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ujNFY.png\" alt=\"enter image description here\" /></a></p>\n<p>You can probably tell directly from the graph that the midpoint is at <span class=\"math-container\">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p>\n<p>So now you want a vertical line that goes through <span class=\"math-container\">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class=\"math-container\">$x$</span>-coordinate is <span class=\"math-container\">$0$</span>. In other words, the equation of the perpendicular bisector is <span class=\"math-container\">$x=0$</span> -- which is actually the <span class=\"math-container\">$y$</span>-axis itself.</p>\n" } ]
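Editorial note: as a cross-check of the general recipe in answer 2079662 (and of the equal-distance property used in answers 2079550 and 2079678), here is a small illustrative Python sketch. It is not taken from any answer, and the helper name perp_bisector is invented for this illustration only.

# Sketch of the formula (x_B - x_A)x + (y_B - y_A)y = (x_B^2 - x_A^2 + y_B^2 - y_A^2)/2
# derived in the vector answer above. perp_bisector is a hypothetical helper name.
def perp_bisector(A, B):
    (xa, ya), (xb, yb) = A, B
    p, q = xb - xa, yb - ya                  # direction of AB = normal of the bisector
    k = (xb**2 - xa**2 + yb**2 - ya**2) / 2  # chosen so the line passes through the midpoint
    return p, q, k                           # the line p*x + q*y = k

p, q, k = perp_bisector((1, -2), (-1, -2))
print(p, q, k)  # -2 0 0.0, i.e. -2x + 0y = 0, which simplifies to x = 0

# Sanity check via the equal-distance property: any point with x = 0 is
# equidistant from (1, -2) and (-1, -2).
x, y = 0, 7.5   # an arbitrary point on x = 0
print((x - 1)**2 + (y + 2)**2 == (x + 1)**2 + (y + 2)**2)  # True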
So, write the equation of the line as:</p>\n\n<p>$$\n(x - 0) = (\\cot 90°)(y - (-2)) \\\\\n\\implies x = 0\n$$</p>\n\n<p>Done.</p>\n" }, { "answer_id": 2079662, "score": 3, "text": "<p>I'll approach this as a more general problem,\nusing techniques that are probably beyond what this question was\nexpecting the solution to use. Consider this an answer for future reference.</p>\n\n<hr>\n\n<p><strong>Problem:</strong></p>\n\n<p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the\nperpendicular bisector of the segment $AB.$</p>\n\n<p><strong>Solution:</strong></p>\n\n<p>Define vectors \n$\\newcommand{\\a}{\\mathbf a}\\a$ and $\\newcommand{\\b}{\\mathbf b}\\b$ \nequal to the displacements of $A$ and $B$ from the origin:\n$$\n\\a = \\begin{pmatrix} x_A \\\\ y_A \\end{pmatrix}, \\qquad\n\\b = \\begin{pmatrix} x_B \\\\ y_B \\end{pmatrix}.\n$$\nNow let $\\newcommand{\\v}{\\mathbf v}\\v = \\b - \\a.$\nThen the equation \n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} = c, \\tag1\n$$\nwhere $\\v\\cdot\\mathbf u$ is the inner product (vector \"dot\" product)\nof $\\v$ and $\\mathbf u$,\nis the equation of a line perpendicular to $\\v,$\nand therefore perpendicular to the segment $AB.$\nThe constant $c$ determines which member of that family of lines\nthe equation describes.</p>\n\n<p>We want a line perpendicular to $AB$ that passes through the midpoint\nof the segment $AB,$ that is, the point \n$\\newcommand{\\xC}{\\frac{x_A+x_B}{2}}\\newcommand{\\yC}{\\frac{y_A+y_B}{2}}\n\\left(\\xC, \\yC\\right).$\nIn order for this point to be on the line described by Equation $1,$\nit must be true that\n$$\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix} = c.\n$$\nWe can use this fact to substitute for $c$ in Equation $1,$ with the result\n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} =\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix}.\n$$\nThis is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a \"simpler\"\nform of the equation is desired. 
</p>\n\n<p>Such a form can be obtained using the fact that\n$$\n\\v = \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}.\n$$</p>\n\n<p>Making this substitution for $\\v,$ multiplying term-by-term to evaluate\nthe inner product, and simplifying algebraically, we have\n\\begin{align}\n\\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix}\n &amp;= \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix},\\\\\n(x_B - x_A)x + (y_B - y_A)y\n &amp;= (x_B - x_A)\\left(\\xC\\right) + (y_B - y_A)\\left(\\yC\\right),\\\\\n(x_B - x_A)x + (y_B - y_A)y &amp;= \\frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2).\n\\end{align}\nFor any given points $A$ and $B$ we can then substitute the given values\nof $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the\nequation of a line in the simple format $px + qy = k$ for constants\n$p,$ $q,$ and $k.$\nThe advantage of this approach over most others is that it has no special\ncases to watch out for (such as when the segment $AB$ is horizontal\nor vertical); it works exactly the same for every pair of points\n$A$ and $B,$ requiring only that they be distinct points.\nBut if you must have the equation in a different format\n(such as, \"$y=mx+b$ unless the line is vertical, in which case write $x=k$\"),\nit is easy to convert the equation above into the desired format.</p>\n\n<p>In the particular instance given in the question,\n$x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$\nand the equation of the line simplifies to\n$$\n-2x + 0y = 0,\n$$\nor even more simply,\n$$\nx = 0.\n$$</p>\n" }, { "answer_id": 2079331, "score": 2, "text": "<p>the line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$ and the perpendicular bisector of these two points is the $y$-axes.</p>\n" }, { "answer_id": 2079717, "score": 2, "text": "<p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to x-axis. So, the perpendicular bisector is parallel to y-axis.$\\implies x=c$</p>\n\n<p>And it passes through $(0,-2)$[midpoint]</p>\n\n<p>Hence the perpendicular bisector equation is $x=0$</p>\n" }, { "answer_id": 2079328, "score": 1, "text": "<p>Since you know coordinates of two points of the line $(-1,-2), (1,-2)$, assume that equation the line is of the form $y=mx+c$.</p>\n\n<p>Now putting $(x,y)=(1,-2),(-1,-2)$, will give you the value of $m$ as $m=\\frac{-2-(-2)}{2}=0$ </p>\n\n<p>Let, $m'$ be the slope of perpendicular bisector, then $m\\times m'=-1$ or $m'=$ not defined.</p>\n\n<p>Finding the coordinates of mid points of the line as $x=\\frac{1-1}{2}=0$ and $y=\\frac{-2-2}{2}=-2$ or $(x,y)=(0,-2)$.</p>\n\n<p>Now, you have slope for the perpendicular and also coordinates of one of its points. </p>\n\n<p>So, $\\frac{y-y_1}{x-x_1}=m'\\implies \\frac{y+2}{x-0}=$ not defined $\\implies x=0$.</p>\n\n<p>So, equation of perpendicular bisector is $x=0$ or the perpendicular bisector of the given line is $y-$axis.</p>\n\n<p><a href=\"https://www.desmos.com/calculator/2u6xyye8mv\" rel=\"nofollow noreferrer\">Here is graph of your problem:</a></p>\n" }, { "answer_id": 2079550, "score": 1, "text": "<p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p>\n\n<p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p>\n\n<p>$$ \\iff x = 0$$</p>\n" }, { "answer_id": 2079431, "score": 0, "text": "<p>To find the required equation follow procedure mentioned below.</p>\n\n<p><em>1. 
Find Mid-Point of the given two points</em>. ;</p>\n\n<p><em>2. Mid-Point is (0,-2). Given line is parallel to the x-axis therefore, any line perpendicular to it will be of the form x = k where k is constant. Because value of abscissa must be constant for a line to be parallel to y-axis and hence, perpendicular to x-axis.</em></p>\n\n<p><em>3. In this case k = 0</em></p>\n\n<p>Therefore, <strong>x = 0</strong> is the required equation.</p>\n" }, { "answer_id": 2079678, "score": 0, "text": "<p>Your approach to link to slope is incorrect.</p>\n\n<p>Equal distance property is to be used.</p>\n\n<p>$$ \\sqrt{(x-1)^2 + (y+2)^2} = \\sqrt{(x+1)^2 + (y+2)^2} $$</p>\n\n<p>Simplifying you get $x=0$- or $y-$ axis.</p>\n\n<p>To verify it, note that between the two points only $x$ coordinate sign has changed. So the locus is halfway in between, that is, the $y-$ axis.</p>\n" }, { "answer_id": 2080008, "score": 0, "text": "<p>A couple of important details:</p>\n<ol>\n<li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendcular bisector of the <strong>segment</strong> that has endpoints at <span class=\"math-container\">$(1,-2)$</span> and <span class=\"math-container\">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li>\n<li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li>\n</ol>\n<p>So now let's figure out what the segment going from <span class=\"math-container\">$(1,-2)$</span> to <span class=\"math-container\">$(-1,-2)$</span> actually looks like. The two points have the same <span class=\"math-container\">$y$</span>-coordinate, which means the segment is horizontal; see below.\n<a href=\"https://i.stack.imgur.com/ujNFY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ujNFY.png\" alt=\"enter image description here\" /></a></p>\n<p>You can probably tell directly from the graph that the midpoint is at <span class=\"math-container\">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p>\n<p>So now you want a vertical line that goes through <span class=\"math-container\">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class=\"math-container\">$x$</span>-coordinate is <span class=\"math-container\">$0$</span>. In other words, the equation of the perpendicular bisector is <span class=\"math-container\">$x=0$</span> -- which is actually the <span class=\"math-container\">$y$</span>-axis itself.</p>\n" }, { "answer_id": 2079340, "score": 5, "text": "<p>While solving problems on coordinate geometry, it often helps to draw the diagram. If you draw the diagram, you'll notice that the perpendicular bisector of the given line is the y-axis, i.e. $x = 0$.</p>\n\n<p>For a rigorous solution, see the following:</p>\n\n<p>We use the section formula to find out the mid-point of the line segment joining $(1, -2)$ and $(-1, -2)$. Thus the midpoint is $\\left ( \\frac{1 + (-1)}{2}, \\frac{-2 + (-2)}{2} \\right ) = (0, -2)$. Now this line segment has a slope of $\\tan 0 = 0$. So it's perpendicular bisector must have a slope of $\\tan 90°$ meaning it is vertical. 
So, write the equation of the line as:</p>\n\n<p>$$\n(x - 0) = (\\cot 90°)(y - (-2)) \\\\\n\\implies x = 0\n$$</p>\n\n<p>Done.</p>\n" }, { "answer_id": 2079662, "score": 3, "text": "<p>I'll approach this as a more general problem,\nusing techniques that are probably beyond what this question was\nexpecting the solution to use. Consider this an answer for future reference.</p>\n\n<hr>\n\n<p><strong>Problem:</strong></p>\n\n<p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the\nperpendicular bisector of the segment $AB.$</p>\n\n<p><strong>Solution:</strong></p>\n\n<p>Define vectors \n$\\newcommand{\\a}{\\mathbf a}\\a$ and $\\newcommand{\\b}{\\mathbf b}\\b$ \nequal to the displacements of $A$ and $B$ from the origin:\n$$\n\\a = \\begin{pmatrix} x_A \\\\ y_A \\end{pmatrix}, \\qquad\n\\b = \\begin{pmatrix} x_B \\\\ y_B \\end{pmatrix}.\n$$\nNow let $\\newcommand{\\v}{\\mathbf v}\\v = \\b - \\a.$\nThen the equation \n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} = c, \\tag1\n$$\nwhere $\\v\\cdot\\mathbf u$ is the inner product (vector \"dot\" product)\nof $\\v$ and $\\mathbf u$,\nis the equation of a line perpendicular to $\\v,$\nand therefore perpendicular to the segment $AB.$\nThe constant $c$ determines which member of that family of lines\nthe equation describes.</p>\n\n<p>We want a line perpendicular to $AB$ that passes through the midpoint\nof the segment $AB,$ that is, the point \n$\\newcommand{\\xC}{\\frac{x_A+x_B}{2}}\\newcommand{\\yC}{\\frac{y_A+y_B}{2}}\n\\left(\\xC, \\yC\\right).$\nIn order for this point to be on the line described by Equation $1,$\nit must be true that\n$$\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix} = c.\n$$\nWe can use this fact to substitute for $c$ in Equation $1,$ with the result\n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} =\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix}.\n$$\nThis is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a \"simpler\"\nform of the equation is desired. 
</p>\n\n<p>Such a form can be obtained using the fact that\n$$\n\\v = \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}.\n$$</p>\n\n<p>Making this substitution for $\\v,$ multiplying term-by-term to evaluate\nthe inner product, and simplifying algebraically, we have\n\\begin{align}\n\\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix}\n &amp;= \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix},\\\\\n(x_B - x_A)x + (y_B - y_A)y\n &amp;= (x_B - x_A)\\left(\\xC\\right) + (y_B - y_A)\\left(\\yC\\right),\\\\\n(x_B - x_A)x + (y_B - y_A)y &amp;= \\frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2).\n\\end{align}\nFor any given points $A$ and $B$ we can then substitute the given values\nof $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the\nequation of a line in the simple format $px + qy = k$ for constants\n$p,$ $q,$ and $k.$\nThe advantage of this approach over most others is that it has no special\ncases to watch out for (such as when the segment $AB$ is horizontal\nor vertical); it works exactly the same for every pair of points\n$A$ and $B,$ requiring only that they be distinct points.\nBut if you must have the equation in a different format\n(such as, \"$y=mx+b$ unless the line is vertical, in which case write $x=k$\"),\nit is easy to convert the equation above into the desired format.</p>\n\n<p>In the particular instance given in the question,\n$x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$\nand the equation of the line simplifies to\n$$\n-2x + 0y = 0,\n$$\nor even more simply,\n$$\nx = 0.\n$$</p>\n" }, { "answer_id": 2079331, "score": 2, "text": "<p>the line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$ and the perpendicular bisector of these two points is the $y$-axes.</p>\n" }, { "answer_id": 2079717, "score": 2, "text": "<p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to x-axis. So, the perpendicular bisector is parallel to y-axis.$\\implies x=c$</p>\n\n<p>And it passes through $(0,-2)$[midpoint]</p>\n\n<p>Hence the perpendicular bisector equation is $x=0$</p>\n" }, { "answer_id": 2079328, "score": 1, "text": "<p>Since you know coordinates of two points of the line $(-1,-2), (1,-2)$, assume that equation the line is of the form $y=mx+c$.</p>\n\n<p>Now putting $(x,y)=(1,-2),(-1,-2)$, will give you the value of $m$ as $m=\\frac{-2-(-2)}{2}=0$ </p>\n\n<p>Let, $m'$ be the slope of perpendicular bisector, then $m\\times m'=-1$ or $m'=$ not defined.</p>\n\n<p>Finding the coordinates of mid points of the line as $x=\\frac{1-1}{2}=0$ and $y=\\frac{-2-2}{2}=-2$ or $(x,y)=(0,-2)$.</p>\n\n<p>Now, you have slope for the perpendicular and also coordinates of one of its points. </p>\n\n<p>So, $\\frac{y-y_1}{x-x_1}=m'\\implies \\frac{y+2}{x-0}=$ not defined $\\implies x=0$.</p>\n\n<p>So, equation of perpendicular bisector is $x=0$ or the perpendicular bisector of the given line is $y-$axis.</p>\n\n<p><a href=\"https://www.desmos.com/calculator/2u6xyye8mv\" rel=\"nofollow noreferrer\">Here is graph of your problem:</a></p>\n" }, { "answer_id": 2079550, "score": 1, "text": "<p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p>\n\n<p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p>\n\n<p>$$ \\iff x = 0$$</p>\n" }, { "answer_id": 2079431, "score": 0, "text": "<p>To find the required equation follow procedure mentioned below.</p>\n\n<p><em>1. 
Find Mid-Point of the given two points</em>. ;</p>\n\n<p><em>2. Mid-Point is (0,-2). Given line is parallel to the x-axis therefore, any line perpendicular to it will be of the form x = k where k is constant. Because value of abscissa must be constant for a line to be parallel to y-axis and hence, perpendicular to x-axis.</em></p>\n\n<p><em>3. In this case k = 0</em></p>\n\n<p>Therefore, <strong>x = 0</strong> is the required equation.</p>\n" }, { "answer_id": 2079678, "score": 0, "text": "<p>Your approach to link to slope is incorrect.</p>\n\n<p>Equal distance property is to be used.</p>\n\n<p>$$ \\sqrt{(x-1)^2 + (y+2)^2} = \\sqrt{(x+1)^2 + (y+2)^2} $$</p>\n\n<p>Simplifying you get $x=0$- or $y-$ axis.</p>\n\n<p>To verify it, note that between the two points only $x$ coordinate sign has changed. So the locus is halfway in between, that is, the $y-$ axis.</p>\n" }, { "answer_id": 2080008, "score": 0, "text": "<p>A couple of important details:</p>\n<ol>\n<li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendcular bisector of the <strong>segment</strong> that has endpoints at <span class=\"math-container\">$(1,-2)$</span> and <span class=\"math-container\">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li>\n<li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li>\n</ol>\n<p>So now let's figure out what the segment going from <span class=\"math-container\">$(1,-2)$</span> to <span class=\"math-container\">$(-1,-2)$</span> actually looks like. The two points have the same <span class=\"math-container\">$y$</span>-coordinate, which means the segment is horizontal; see below.\n<a href=\"https://i.stack.imgur.com/ujNFY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ujNFY.png\" alt=\"enter image description here\" /></a></p>\n<p>You can probably tell directly from the graph that the midpoint is at <span class=\"math-container\">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p>\n<p>So now you want a vertical line that goes through <span class=\"math-container\">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class=\"math-container\">$x$</span>-coordinate is <span class=\"math-container\">$0$</span>. In other words, the equation of the perpendicular bisector is <span class=\"math-container\">$x=0$</span> -- which is actually the <span class=\"math-container\">$y$</span>-axis itself.</p>\n" }, { "answer_id": 2079340, "score": 5, "text": "<p>While solving problems on coordinate geometry, it often helps to draw the diagram. If you draw the diagram, you'll notice that the perpendicular bisector of the given line is the y-axis, i.e. $x = 0$.</p>\n\n<p>For a rigorous solution, see the following:</p>\n\n<p>We use the section formula to find out the mid-point of the line segment joining $(1, -2)$ and $(-1, -2)$. Thus the midpoint is $\\left ( \\frac{1 + (-1)}{2}, \\frac{-2 + (-2)}{2} \\right ) = (0, -2)$. Now this line segment has a slope of $\\tan 0 = 0$. So it's perpendicular bisector must have a slope of $\\tan 90°$ meaning it is vertical. 
So, write the equation of the line as:</p>\n\n<p>$$\n(x - 0) = (\\cot 90°)(y - (-2)) \\\\\n\\implies x = 0\n$$</p>\n\n<p>Done.</p>\n" }, { "answer_id": 2079662, "score": 3, "text": "<p>I'll approach this as a more general problem,\nusing techniques that are probably beyond what this question was\nexpecting the solution to use. Consider this an answer for future reference.</p>\n\n<hr>\n\n<p><strong>Problem:</strong></p>\n\n<p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the\nperpendicular bisector of the segment $AB.$</p>\n\n<p><strong>Solution:</strong></p>\n\n<p>Define vectors \n$\\newcommand{\\a}{\\mathbf a}\\a$ and $\\newcommand{\\b}{\\mathbf b}\\b$ \nequal to the displacements of $A$ and $B$ from the origin:\n$$\n\\a = \\begin{pmatrix} x_A \\\\ y_A \\end{pmatrix}, \\qquad\n\\b = \\begin{pmatrix} x_B \\\\ y_B \\end{pmatrix}.\n$$\nNow let $\\newcommand{\\v}{\\mathbf v}\\v = \\b - \\a.$\nThen the equation \n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} = c, \\tag1\n$$\nwhere $\\v\\cdot\\mathbf u$ is the inner product (vector \"dot\" product)\nof $\\v$ and $\\mathbf u$,\nis the equation of a line perpendicular to $\\v,$\nand therefore perpendicular to the segment $AB.$\nThe constant $c$ determines which member of that family of lines\nthe equation describes.</p>\n\n<p>We want a line perpendicular to $AB$ that passes through the midpoint\nof the segment $AB,$ that is, the point \n$\\newcommand{\\xC}{\\frac{x_A+x_B}{2}}\\newcommand{\\yC}{\\frac{y_A+y_B}{2}}\n\\left(\\xC, \\yC\\right).$\nIn order for this point to be on the line described by Equation $1,$\nit must be true that\n$$\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix} = c.\n$$\nWe can use this fact to substitute for $c$ in Equation $1,$ with the result\n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} =\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix}.\n$$\nThis is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a \"simpler\"\nform of the equation is desired. 
</p>\n\n<p>Such a form can be obtained using the fact that\n$$\n\\v = \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}.\n$$</p>\n\n<p>Making this substitution for $\\v,$ multiplying term-by-term to evaluate\nthe inner product, and simplifying algebraically, we have\n\\begin{align}\n\\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix}\n &amp;= \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix},\\\\\n(x_B - x_A)x + (y_B - y_A)y\n &amp;= (x_B - x_A)\\left(\\xC\\right) + (y_B - y_A)\\left(\\yC\\right),\\\\\n(x_B - x_A)x + (y_B - y_A)y &amp;= \\frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2).\n\\end{align}\nFor any given points $A$ and $B$ we can then substitute the given values\nof $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the\nequation of a line in the simple format $px + qy = k$ for constants\n$p,$ $q,$ and $k.$\nThe advantage of this approach over most others is that it has no special\ncases to watch out for (such as when the segment $AB$ is horizontal\nor vertical); it works exactly the same for every pair of points\n$A$ and $B,$ requiring only that they be distinct points.\nBut if you must have the equation in a different format\n(such as, \"$y=mx+b$ unless the line is vertical, in which case write $x=k$\"),\nit is easy to convert the equation above into the desired format.</p>\n\n<p>In the particular instance given in the question,\n$x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$\nand the equation of the line simplifies to\n$$\n-2x + 0y = 0,\n$$\nor even more simply,\n$$\nx = 0.\n$$</p>\n" }, { "answer_id": 2079331, "score": 2, "text": "<p>the line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$ and the perpendicular bisector of these two points is the $y$-axes.</p>\n" }, { "answer_id": 2079717, "score": 2, "text": "<p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to x-axis. So, the perpendicular bisector is parallel to y-axis.$\\implies x=c$</p>\n\n<p>And it passes through $(0,-2)$[midpoint]</p>\n\n<p>Hence the perpendicular bisector equation is $x=0$</p>\n" }, { "answer_id": 2079328, "score": 1, "text": "<p>Since you know coordinates of two points of the line $(-1,-2), (1,-2)$, assume that equation the line is of the form $y=mx+c$.</p>\n\n<p>Now putting $(x,y)=(1,-2),(-1,-2)$, will give you the value of $m$ as $m=\\frac{-2-(-2)}{2}=0$ </p>\n\n<p>Let, $m'$ be the slope of perpendicular bisector, then $m\\times m'=-1$ or $m'=$ not defined.</p>\n\n<p>Finding the coordinates of mid points of the line as $x=\\frac{1-1}{2}=0$ and $y=\\frac{-2-2}{2}=-2$ or $(x,y)=(0,-2)$.</p>\n\n<p>Now, you have slope for the perpendicular and also coordinates of one of its points. </p>\n\n<p>So, $\\frac{y-y_1}{x-x_1}=m'\\implies \\frac{y+2}{x-0}=$ not defined $\\implies x=0$.</p>\n\n<p>So, equation of perpendicular bisector is $x=0$ or the perpendicular bisector of the given line is $y-$axis.</p>\n\n<p><a href=\"https://www.desmos.com/calculator/2u6xyye8mv\" rel=\"nofollow noreferrer\">Here is graph of your problem:</a></p>\n" }, { "answer_id": 2079550, "score": 1, "text": "<p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p>\n\n<p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p>\n\n<p>$$ \\iff x = 0$$</p>\n" }, { "answer_id": 2079431, "score": 0, "text": "<p>To find the required equation follow procedure mentioned below.</p>\n\n<p><em>1. 
Find Mid-Point of the given two points</em>. ;</p>\n\n<p><em>2. Mid-Point is (0,-2). Given line is parallel to the x-axis therefore, any line perpendicular to it will be of the form x = k where k is constant. Because value of abscissa must be constant for a line to be parallel to y-axis and hence, perpendicular to x-axis.</em></p>\n\n<p><em>3. In this case k = 0</em></p>\n\n<p>Therefore, <strong>x = 0</strong> is the required equation.</p>\n" }, { "answer_id": 2079678, "score": 0, "text": "<p>Your approach to link to slope is incorrect.</p>\n\n<p>Equal distance property is to be used.</p>\n\n<p>$$ \\sqrt{(x-1)^2 + (y+2)^2} = \\sqrt{(x+1)^2 + (y+2)^2} $$</p>\n\n<p>Simplifying you get $x=0$- or $y-$ axis.</p>\n\n<p>To verify it, note that between the two points only $x$ coordinate sign has changed. So the locus is halfway in between, that is, the $y-$ axis.</p>\n" }, { "answer_id": 2080008, "score": 0, "text": "<p>A couple of important details:</p>\n<ol>\n<li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendcular bisector of the <strong>segment</strong> that has endpoints at <span class=\"math-container\">$(1,-2)$</span> and <span class=\"math-container\">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li>\n<li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li>\n</ol>\n<p>So now let's figure out what the segment going from <span class=\"math-container\">$(1,-2)$</span> to <span class=\"math-container\">$(-1,-2)$</span> actually looks like. The two points have the same <span class=\"math-container\">$y$</span>-coordinate, which means the segment is horizontal; see below.\n<a href=\"https://i.stack.imgur.com/ujNFY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ujNFY.png\" alt=\"enter image description here\" /></a></p>\n<p>You can probably tell directly from the graph that the midpoint is at <span class=\"math-container\">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p>\n<p>So now you want a vertical line that goes through <span class=\"math-container\">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class=\"math-container\">$x$</span>-coordinate is <span class=\"math-container\">$0$</span>. In other words, the equation of the perpendicular bisector is <span class=\"math-container\">$x=0$</span> -- which is actually the <span class=\"math-container\">$y$</span>-axis itself.</p>\n" }, { "answer_id": 2079340, "score": 5, "text": "<p>While solving problems on coordinate geometry, it often helps to draw the diagram. If you draw the diagram, you'll notice that the perpendicular bisector of the given line is the y-axis, i.e. $x = 0$.</p>\n\n<p>For a rigorous solution, see the following:</p>\n\n<p>We use the section formula to find out the mid-point of the line segment joining $(1, -2)$ and $(-1, -2)$. Thus the midpoint is $\\left ( \\frac{1 + (-1)}{2}, \\frac{-2 + (-2)}{2} \\right ) = (0, -2)$. Now this line segment has a slope of $\\tan 0 = 0$. So it's perpendicular bisector must have a slope of $\\tan 90°$ meaning it is vertical. 
So, write the equation of the line as:</p>\n\n<p>$$\n(x - 0) = (\\cot 90°)(y - (-2)) \\\\\n\\implies x = 0\n$$</p>\n\n<p>Done.</p>\n" }, { "answer_id": 2079662, "score": 3, "text": "<p>I'll approach this as a more general problem,\nusing techniques that are probably beyond what this question was\nexpecting the solution to use. Consider this an answer for future reference.</p>\n\n<hr>\n\n<p><strong>Problem:</strong></p>\n\n<p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the\nperpendicular bisector of the segment $AB.$</p>\n\n<p><strong>Solution:</strong></p>\n\n<p>Define vectors \n$\\newcommand{\\a}{\\mathbf a}\\a$ and $\\newcommand{\\b}{\\mathbf b}\\b$ \nequal to the displacements of $A$ and $B$ from the origin:\n$$\n\\a = \\begin{pmatrix} x_A \\\\ y_A \\end{pmatrix}, \\qquad\n\\b = \\begin{pmatrix} x_B \\\\ y_B \\end{pmatrix}.\n$$\nNow let $\\newcommand{\\v}{\\mathbf v}\\v = \\b - \\a.$\nThen the equation \n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} = c, \\tag1\n$$\nwhere $\\v\\cdot\\mathbf u$ is the inner product (vector \"dot\" product)\nof $\\v$ and $\\mathbf u$,\nis the equation of a line perpendicular to $\\v,$\nand therefore perpendicular to the segment $AB.$\nThe constant $c$ determines which member of that family of lines\nthe equation describes.</p>\n\n<p>We want a line perpendicular to $AB$ that passes through the midpoint\nof the segment $AB,$ that is, the point \n$\\newcommand{\\xC}{\\frac{x_A+x_B}{2}}\\newcommand{\\yC}{\\frac{y_A+y_B}{2}}\n\\left(\\xC, \\yC\\right).$\nIn order for this point to be on the line described by Equation $1,$\nit must be true that\n$$\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix} = c.\n$$\nWe can use this fact to substitute for $c$ in Equation $1,$ with the result\n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} =\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix}.\n$$\nThis is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a \"simpler\"\nform of the equation is desired. 
</p>\n\n<p>Such a form can be obtained using the fact that\n$$\n\\v = \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}.\n$$</p>\n\n<p>Making this substitution for $\\v,$ multiplying term-by-term to evaluate\nthe inner product, and simplifying algebraically, we have\n\\begin{align}\n\\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix}\n &amp;= \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix},\\\\\n(x_B - x_A)x + (y_B - y_A)y\n &amp;= (x_B - x_A)\\left(\\xC\\right) + (y_B - y_A)\\left(\\yC\\right),\\\\\n(x_B - x_A)x + (y_B - y_A)y &amp;= \\frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2).\n\\end{align}\nFor any given points $A$ and $B$ we can then substitute the given values\nof $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the\nequation of a line in the simple format $px + qy = k$ for constants\n$p,$ $q,$ and $k.$\nThe advantage of this approach over most others is that it has no special\ncases to watch out for (such as when the segment $AB$ is horizontal\nor vertical); it works exactly the same for every pair of points\n$A$ and $B,$ requiring only that they be distinct points.\nBut if you must have the equation in a different format\n(such as, \"$y=mx+b$ unless the line is vertical, in which case write $x=k$\"),\nit is easy to convert the equation above into the desired format.</p>\n\n<p>In the particular instance given in the question,\n$x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$\nand the equation of the line simplifies to\n$$\n-2x + 0y = 0,\n$$\nor even more simply,\n$$\nx = 0.\n$$</p>\n" }, { "answer_id": 2079331, "score": 2, "text": "<p>the line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$ and the perpendicular bisector of these two points is the $y$-axes.</p>\n" }, { "answer_id": 2079717, "score": 2, "text": "<p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to x-axis. So, the perpendicular bisector is parallel to y-axis.$\\implies x=c$</p>\n\n<p>And it passes through $(0,-2)$[midpoint]</p>\n\n<p>Hence the perpendicular bisector equation is $x=0$</p>\n" }, { "answer_id": 2079328, "score": 1, "text": "<p>Since you know coordinates of two points of the line $(-1,-2), (1,-2)$, assume that equation the line is of the form $y=mx+c$.</p>\n\n<p>Now putting $(x,y)=(1,-2),(-1,-2)$, will give you the value of $m$ as $m=\\frac{-2-(-2)}{2}=0$ </p>\n\n<p>Let, $m'$ be the slope of perpendicular bisector, then $m\\times m'=-1$ or $m'=$ not defined.</p>\n\n<p>Finding the coordinates of mid points of the line as $x=\\frac{1-1}{2}=0$ and $y=\\frac{-2-2}{2}=-2$ or $(x,y)=(0,-2)$.</p>\n\n<p>Now, you have slope for the perpendicular and also coordinates of one of its points. </p>\n\n<p>So, $\\frac{y-y_1}{x-x_1}=m'\\implies \\frac{y+2}{x-0}=$ not defined $\\implies x=0$.</p>\n\n<p>So, equation of perpendicular bisector is $x=0$ or the perpendicular bisector of the given line is $y-$axis.</p>\n\n<p><a href=\"https://www.desmos.com/calculator/2u6xyye8mv\" rel=\"nofollow noreferrer\">Here is graph of your problem:</a></p>\n" }, { "answer_id": 2079550, "score": 1, "text": "<p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p>\n\n<p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p>\n\n<p>$$ \\iff x = 0$$</p>\n" }, { "answer_id": 2079431, "score": 0, "text": "<p>To find the required equation follow procedure mentioned below.</p>\n\n<p><em>1. 
Find Mid-Point of the given two points</em>. ;</p>\n\n<p><em>2. Mid-Point is (0,-2). Given line is parallel to the x-axis therefore, any line perpendicular to it will be of the form x = k where k is constant. Because value of abscissa must be constant for a line to be parallel to y-axis and hence, perpendicular to x-axis.</em></p>\n\n<p><em>3. In this case k = 0</em></p>\n\n<p>Therefore, <strong>x = 0</strong> is the required equation.</p>\n" }, { "answer_id": 2079678, "score": 0, "text": "<p>Your approach to link to slope is incorrect.</p>\n\n<p>Equal distance property is to be used.</p>\n\n<p>$$ \\sqrt{(x-1)^2 + (y+2)^2} = \\sqrt{(x+1)^2 + (y+2)^2} $$</p>\n\n<p>Simplifying you get $x=0$- or $y-$ axis.</p>\n\n<p>To verify it, note that between the two points only $x$ coordinate sign has changed. So the locus is halfway in between, that is, the $y-$ axis.</p>\n" }, { "answer_id": 2080008, "score": 0, "text": "<p>A couple of important details:</p>\n<ol>\n<li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendcular bisector of the <strong>segment</strong> that has endpoints at <span class=\"math-container\">$(1,-2)$</span> and <span class=\"math-container\">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li>\n<li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li>\n</ol>\n<p>So now let's figure out what the segment going from <span class=\"math-container\">$(1,-2)$</span> to <span class=\"math-container\">$(-1,-2)$</span> actually looks like. The two points have the same <span class=\"math-container\">$y$</span>-coordinate, which means the segment is horizontal; see below.\n<a href=\"https://i.stack.imgur.com/ujNFY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ujNFY.png\" alt=\"enter image description here\" /></a></p>\n<p>You can probably tell directly from the graph that the midpoint is at <span class=\"math-container\">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p>\n<p>So now you want a vertical line that goes through <span class=\"math-container\">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class=\"math-container\">$x$</span>-coordinate is <span class=\"math-container\">$0$</span>. In other words, the equation of the perpendicular bisector is <span class=\"math-container\">$x=0$</span> -- which is actually the <span class=\"math-container\">$y$</span>-axis itself.</p>\n" }, { "answer_id": 2079340, "score": 5, "text": "<p>While solving problems on coordinate geometry, it often helps to draw the diagram. If you draw the diagram, you'll notice that the perpendicular bisector of the given line is the y-axis, i.e. $x = 0$.</p>\n\n<p>For a rigorous solution, see the following:</p>\n\n<p>We use the section formula to find out the mid-point of the line segment joining $(1, -2)$ and $(-1, -2)$. Thus the midpoint is $\\left ( \\frac{1 + (-1)}{2}, \\frac{-2 + (-2)}{2} \\right ) = (0, -2)$. Now this line segment has a slope of $\\tan 0 = 0$. So it's perpendicular bisector must have a slope of $\\tan 90°$ meaning it is vertical. 
So, write the equation of the line as:</p>\n\n<p>$$\n(x - 0) = (\\cot 90°)(y - (-2)) \\\\\n\\implies x = 0\n$$</p>\n\n<p>Done.</p>\n" }, { "answer_id": 2079662, "score": 3, "text": "<p>I'll approach this as a more general problem,\nusing techniques that are probably beyond what this question was\nexpecting the solution to use. Consider this an answer for future reference.</p>\n\n<hr>\n\n<p><strong>Problem:</strong></p>\n\n<p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the\nperpendicular bisector of the segment $AB.$</p>\n\n<p><strong>Solution:</strong></p>\n\n<p>Define vectors \n$\\newcommand{\\a}{\\mathbf a}\\a$ and $\\newcommand{\\b}{\\mathbf b}\\b$ \nequal to the displacements of $A$ and $B$ from the origin:\n$$\n\\a = \\begin{pmatrix} x_A \\\\ y_A \\end{pmatrix}, \\qquad\n\\b = \\begin{pmatrix} x_B \\\\ y_B \\end{pmatrix}.\n$$\nNow let $\\newcommand{\\v}{\\mathbf v}\\v = \\b - \\a.$\nThen the equation \n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} = c, \\tag1\n$$\nwhere $\\v\\cdot\\mathbf u$ is the inner product (vector \"dot\" product)\nof $\\v$ and $\\mathbf u$,\nis the equation of a line perpendicular to $\\v,$\nand therefore perpendicular to the segment $AB.$\nThe constant $c$ determines which member of that family of lines\nthe equation describes.</p>\n\n<p>We want a line perpendicular to $AB$ that passes through the midpoint\nof the segment $AB,$ that is, the point \n$\\newcommand{\\xC}{\\frac{x_A+x_B}{2}}\\newcommand{\\yC}{\\frac{y_A+y_B}{2}}\n\\left(\\xC, \\yC\\right).$\nIn order for this point to be on the line described by Equation $1,$\nit must be true that\n$$\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix} = c.\n$$\nWe can use this fact to substitute for $c$ in Equation $1,$ with the result\n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} =\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix}.\n$$\nThis is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a \"simpler\"\nform of the equation is desired. 
</p>\n\n<p>Such a form can be obtained using the fact that\n$$\n\\v = \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}.\n$$</p>\n\n<p>Making this substitution for $\\v,$ multiplying term-by-term to evaluate\nthe inner product, and simplifying algebraically, we have\n\\begin{align}\n\\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix}\n &amp;= \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix},\\\\\n(x_B - x_A)x + (y_B - y_A)y\n &amp;= (x_B - x_A)\\left(\\xC\\right) + (y_B - y_A)\\left(\\yC\\right),\\\\\n(x_B - x_A)x + (y_B - y_A)y &amp;= \\frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2).\n\\end{align}\nFor any given points $A$ and $B$ we can then substitute the given values\nof $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the\nequation of a line in the simple format $px + qy = k$ for constants\n$p,$ $q,$ and $k.$\nThe advantage of this approach over most others is that it has no special\ncases to watch out for (such as when the segment $AB$ is horizontal\nor vertical); it works exactly the same for every pair of points\n$A$ and $B,$ requiring only that they be distinct points.\nBut if you must have the equation in a different format\n(such as, \"$y=mx+b$ unless the line is vertical, in which case write $x=k$\"),\nit is easy to convert the equation above into the desired format.</p>\n\n<p>In the particular instance given in the question,\n$x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$\nand the equation of the line simplifies to\n$$\n-2x + 0y = 0,\n$$\nor even more simply,\n$$\nx = 0.\n$$</p>\n" }, { "answer_id": 2079331, "score": 2, "text": "<p>the line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$ and the perpendicular bisector of these two points is the $y$-axes.</p>\n" }, { "answer_id": 2079717, "score": 2, "text": "<p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to x-axis. So, the perpendicular bisector is parallel to y-axis.$\\implies x=c$</p>\n\n<p>And it passes through $(0,-2)$[midpoint]</p>\n\n<p>Hence the perpendicular bisector equation is $x=0$</p>\n" }, { "answer_id": 2079328, "score": 1, "text": "<p>Since you know coordinates of two points of the line $(-1,-2), (1,-2)$, assume that equation the line is of the form $y=mx+c$.</p>\n\n<p>Now putting $(x,y)=(1,-2),(-1,-2)$, will give you the value of $m$ as $m=\\frac{-2-(-2)}{2}=0$ </p>\n\n<p>Let, $m'$ be the slope of perpendicular bisector, then $m\\times m'=-1$ or $m'=$ not defined.</p>\n\n<p>Finding the coordinates of mid points of the line as $x=\\frac{1-1}{2}=0$ and $y=\\frac{-2-2}{2}=-2$ or $(x,y)=(0,-2)$.</p>\n\n<p>Now, you have slope for the perpendicular and also coordinates of one of its points. </p>\n\n<p>So, $\\frac{y-y_1}{x-x_1}=m'\\implies \\frac{y+2}{x-0}=$ not defined $\\implies x=0$.</p>\n\n<p>So, equation of perpendicular bisector is $x=0$ or the perpendicular bisector of the given line is $y-$axis.</p>\n\n<p><a href=\"https://www.desmos.com/calculator/2u6xyye8mv\" rel=\"nofollow noreferrer\">Here is graph of your problem:</a></p>\n" }, { "answer_id": 2079550, "score": 1, "text": "<p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p>\n\n<p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p>\n\n<p>$$ \\iff x = 0$$</p>\n" }, { "answer_id": 2079431, "score": 0, "text": "<p>To find the required equation follow procedure mentioned below.</p>\n\n<p><em>1. 
Find Mid-Point of the given two points</em>. ;</p>\n\n<p><em>2. Mid-Point is (0,-2). Given line is parallel to the x-axis therefore, any line perpendicular to it will be of the form x = k where k is constant. Because value of abscissa must be constant for a line to be parallel to y-axis and hence, perpendicular to x-axis.</em></p>\n\n<p><em>3. In this case k = 0</em></p>\n\n<p>Therefore, <strong>x = 0</strong> is the required equation.</p>\n" }, { "answer_id": 2079678, "score": 0, "text": "<p>Your approach to link to slope is incorrect.</p>\n\n<p>Equal distance property is to be used.</p>\n\n<p>$$ \\sqrt{(x-1)^2 + (y+2)^2} = \\sqrt{(x+1)^2 + (y+2)^2} $$</p>\n\n<p>Simplifying you get $x=0$- or $y-$ axis.</p>\n\n<p>To verify it, note that between the two points only $x$ coordinate sign has changed. So the locus is halfway in between, that is, the $y-$ axis.</p>\n" }, { "answer_id": 2080008, "score": 0, "text": "<p>A couple of important details:</p>\n<ol>\n<li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendcular bisector of the <strong>segment</strong> that has endpoints at <span class=\"math-container\">$(1,-2)$</span> and <span class=\"math-container\">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li>\n<li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li>\n</ol>\n<p>So now let's figure out what the segment going from <span class=\"math-container\">$(1,-2)$</span> to <span class=\"math-container\">$(-1,-2)$</span> actually looks like. The two points have the same <span class=\"math-container\">$y$</span>-coordinate, which means the segment is horizontal; see below.\n<a href=\"https://i.stack.imgur.com/ujNFY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ujNFY.png\" alt=\"enter image description here\" /></a></p>\n<p>You can probably tell directly from the graph that the midpoint is at <span class=\"math-container\">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p>\n<p>So now you want a vertical line that goes through <span class=\"math-container\">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class=\"math-container\">$x$</span>-coordinate is <span class=\"math-container\">$0$</span>. In other words, the equation of the perpendicular bisector is <span class=\"math-container\">$x=0$</span> -- which is actually the <span class=\"math-container\">$y$</span>-axis itself.</p>\n" }, { "answer_id": 2079340, "score": 5, "text": "<p>While solving problems on coordinate geometry, it often helps to draw the diagram. If you draw the diagram, you'll notice that the perpendicular bisector of the given line is the y-axis, i.e. $x = 0$.</p>\n\n<p>For a rigorous solution, see the following:</p>\n\n<p>We use the section formula to find out the mid-point of the line segment joining $(1, -2)$ and $(-1, -2)$. Thus the midpoint is $\\left ( \\frac{1 + (-1)}{2}, \\frac{-2 + (-2)}{2} \\right ) = (0, -2)$. Now this line segment has a slope of $\\tan 0 = 0$. So it's perpendicular bisector must have a slope of $\\tan 90°$ meaning it is vertical. 
So, write the equation of the line as:</p>\n\n<p>$$\n(x - 0) = (\\cot 90°)(y - (-2)) \\\\\n\\implies x = 0\n$$</p>\n\n<p>Done.</p>\n" }, { "answer_id": 2079662, "score": 3, "text": "<p>I'll approach this as a more general problem,\nusing techniques that are probably beyond what this question was\nexpecting the solution to use. Consider this an answer for future reference.</p>\n\n<hr>\n\n<p><strong>Problem:</strong></p>\n\n<p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the\nperpendicular bisector of the segment $AB.$</p>\n\n<p><strong>Solution:</strong></p>\n\n<p>Define vectors \n$\\newcommand{\\a}{\\mathbf a}\\a$ and $\\newcommand{\\b}{\\mathbf b}\\b$ \nequal to the displacements of $A$ and $B$ from the origin:\n$$\n\\a = \\begin{pmatrix} x_A \\\\ y_A \\end{pmatrix}, \\qquad\n\\b = \\begin{pmatrix} x_B \\\\ y_B \\end{pmatrix}.\n$$\nNow let $\\newcommand{\\v}{\\mathbf v}\\v = \\b - \\a.$\nThen the equation \n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} = c, \\tag1\n$$\nwhere $\\v\\cdot\\mathbf u$ is the inner product (vector \"dot\" product)\nof $\\v$ and $\\mathbf u$,\nis the equation of a line perpendicular to $\\v,$\nand therefore perpendicular to the segment $AB.$\nThe constant $c$ determines which member of that family of lines\nthe equation describes.</p>\n\n<p>We want a line perpendicular to $AB$ that passes through the midpoint\nof the segment $AB,$ that is, the point \n$\\newcommand{\\xC}{\\frac{x_A+x_B}{2}}\\newcommand{\\yC}{\\frac{y_A+y_B}{2}}\n\\left(\\xC, \\yC\\right).$\nIn order for this point to be on the line described by Equation $1,$\nit must be true that\n$$\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix} = c.\n$$\nWe can use this fact to substitute for $c$ in Equation $1,$ with the result\n$$\n\\v \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix} =\n\\v \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix}.\n$$\nThis is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a \"simpler\"\nform of the equation is desired. 
</p>\n\n<p>Such a form can be obtained using the fact that\n$$\n\\v = \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}.\n$$</p>\n\n<p>Making this substitution for $\\v,$ multiplying term-by-term to evaluate\nthe inner product, and simplifying algebraically, we have\n\\begin{align}\n\\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix} x \\\\ y \\end{pmatrix}\n &amp;= \\begin{pmatrix} x_B - x_A \\\\ y_B - y_A \\end{pmatrix}\n \\cdot \\begin{pmatrix}\\xC \\\\ \\yC\\end{pmatrix},\\\\\n(x_B - x_A)x + (y_B - y_A)y\n &amp;= (x_B - x_A)\\left(\\xC\\right) + (y_B - y_A)\\left(\\yC\\right),\\\\\n(x_B - x_A)x + (y_B - y_A)y &amp;= \\frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2).\n\\end{align}\nFor any given points $A$ and $B$ we can then substitute the given values\nof $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the\nequation of a line in the simple format $px + qy = k$ for constants\n$p,$ $q,$ and $k.$\nThe advantage of this approach over most others is that it has no special\ncases to watch out for (such as when the segment $AB$ is horizontal\nor vertical); it works exactly the same for every pair of points\n$A$ and $B,$ requiring only that they be distinct points.\nBut if you must have the equation in a different format\n(such as, \"$y=mx+b$ unless the line is vertical, in which case write $x=k$\"),\nit is easy to convert the equation above into the desired format.</p>\n\n<p>In the particular instance given in the question,\n$x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$\nand the equation of the line simplifies to\n$$\n-2x + 0y = 0,\n$$\nor even more simply,\n$$\nx = 0.\n$$</p>\n" }, { "answer_id": 2079331, "score": 2, "text": "<p>the line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$ and the perpendicular bisector of these two points is the $y$-axes.</p>\n" }, { "answer_id": 2079717, "score": 2, "text": "<p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to x-axis. So, the perpendicular bisector is parallel to y-axis.$\\implies x=c$</p>\n\n<p>And it passes through $(0,-2)$[midpoint]</p>\n\n<p>Hence the perpendicular bisector equation is $x=0$</p>\n" }, { "answer_id": 2079328, "score": 1, "text": "<p>Since you know coordinates of two points of the line $(-1,-2), (1,-2)$, assume that equation the line is of the form $y=mx+c$.</p>\n\n<p>Now putting $(x,y)=(1,-2),(-1,-2)$, will give you the value of $m$ as $m=\\frac{-2-(-2)}{2}=0$ </p>\n\n<p>Let, $m'$ be the slope of perpendicular bisector, then $m\\times m'=-1$ or $m'=$ not defined.</p>\n\n<p>Finding the coordinates of mid points of the line as $x=\\frac{1-1}{2}=0$ and $y=\\frac{-2-2}{2}=-2$ or $(x,y)=(0,-2)$.</p>\n\n<p>Now, you have slope for the perpendicular and also coordinates of one of its points. </p>\n\n<p>So, $\\frac{y-y_1}{x-x_1}=m'\\implies \\frac{y+2}{x-0}=$ not defined $\\implies x=0$.</p>\n\n<p>So, equation of perpendicular bisector is $x=0$ or the perpendicular bisector of the given line is $y-$axis.</p>\n\n<p><a href=\"https://www.desmos.com/calculator/2u6xyye8mv\" rel=\"nofollow noreferrer\">Here is graph of your problem:</a></p>\n" }, { "answer_id": 2079550, "score": 1, "text": "<p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p>\n\n<p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p>\n\n<p>$$ \\iff x = 0$$</p>\n" }, { "answer_id": 2079431, "score": 0, "text": "<p>To find the required equation follow procedure mentioned below.</p>\n\n<p><em>1. 
Find the mid-point of the given two points.</em></p>\n\n<p><em>2. The mid-point is (0,-2). The given line is parallel to the x-axis; therefore, any line perpendicular to it will be of the form x = k, where k is a constant. This is because the abscissa must be constant for a line to be parallel to the y-axis and hence perpendicular to the x-axis.</em></p>\n\n<p><em>3. In this case k = 0.</em></p>\n\n<p>Therefore, <strong>x = 0</strong> is the required equation.</p>\n" }, { "answer_id": 2079678, "score": 0, "text": "<p>Your approach of linking the slopes is incorrect here.</p>\n\n<p>The equal-distance property is to be used.</p>\n\n<p>$$ \\sqrt{(x-1)^2 + (y+2)^2} = \\sqrt{(x+1)^2 + (y+2)^2} $$</p>\n\n<p>Simplifying, you get $x=0$, i.e. the $y$-axis.</p>\n\n<p>To verify it, note that between the two points only the sign of the $x$ coordinate has changed. So the locus is halfway in between, that is, the $y$-axis.</p>\n" }, { "answer_id": 2080008, "score": 0, "text": "<p>A couple of important details:</p>\n<ol>\n<li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendicular bisector of the <strong>segment</strong> that has endpoints at <span class=\"math-container\">$(1,-2)$</span> and <span class=\"math-container\">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li>\n<li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li>\n</ol>\n<p>So now let's figure out what the segment going from <span class=\"math-container\">$(1,-2)$</span> to <span class=\"math-container\">$(-1,-2)$</span> actually looks like. The two points have the same <span class=\"math-container\">$y$</span>-coordinate, which means the segment is horizontal; see below.\n<a href=\"https://i.stack.imgur.com/ujNFY.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/ujNFY.png\" alt=\"enter image description here\" /></a></p>\n<p>You can probably tell directly from the graph that the midpoint is at <span class=\"math-container\">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p>\n<p>So now you want a vertical line that goes through <span class=\"math-container\">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class=\"math-container\">$x$</span>-coordinate is <span class=\"math-container\">$0$</span>. In other words, the equation of the perpendicular bisector is <span class=\"math-container\">$x=0$</span> -- which is actually the <span class=\"math-container\">$y$</span>-axis itself.</p>\n" } ]
Question: Writing the equation of a perpendicular bisector Answer 1: <p>While solving problems on coordinate geometry, it often helps to draw the diagram. If you draw the diagram, you'll notice that the perpendicular bisector of the given line is the y-axis, i.e. $x = 0$.</p> <p>For a rigorous solution, see the following:</p> <p>We use the section formula to find out the mid-point of the line segment joining $(1, -2)$ and $(-1, -2)$. Thus the midpoint is $\left ( \frac{1 + (-1)}{2}, \frac{-2 + (-2)}{2} \right ) = (0, -2)$. Now this line segment has a slope of $\tan 0 = 0$. So its perpendicular bisector must have a slope of $\tan 90°$ (which is undefined), meaning it is vertical. So, write the equation of the line as:</p> <p>$$ (x - 0) = (\cot 90°)(y - (-2)) \\ \implies x = 0 $$</p> <p>Done.</p> Answer 2: <p>I'll approach this as a more general problem, using techniques that are probably beyond what this question was expecting the solution to use. Consider this an answer for future reference.</p> <hr> <p><strong>Problem:</strong></p> <p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the perpendicular bisector of the segment $AB.$</p> <p><strong>Solution:</strong></p> <p>Define vectors $\newcommand{\a}{\mathbf a}\a$ and $\newcommand{\b}{\mathbf b}\b$ equal to the displacements of $A$ and $B$ from the origin: $$ \a = \begin{pmatrix} x_A \\ y_A \end{pmatrix}, \qquad \b = \begin{pmatrix} x_B \\ y_B \end{pmatrix}. $$ Now let $\newcommand{\v}{\mathbf v}\v = \b - \a.$ Then the equation $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = c, \tag1 $$ where $\v\cdot\mathbf u$ is the inner product (vector "dot" product) of $\v$ and $\mathbf u$, is the equation of a line perpendicular to $\v,$ and therefore perpendicular to the segment $AB.$ The constant $c$ determines which member of that family of lines the equation describes.</p> <p>We want a line perpendicular to $AB$ that passes through the midpoint of the segment $AB,$ that is, the point $\newcommand{\xC}{\frac{x_A+x_B}{2}}\newcommand{\yC}{\frac{y_A+y_B}{2}} \left(\xC, \yC\right).$ In order for this point to be on the line described by Equation $1,$ it must be true that $$ \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix} = c. $$ We can use this fact to substitute for $c$ in Equation $1,$ with the result $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix}. $$ This is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a "simpler" form of the equation is desired. </p> <p>Such a form can be obtained using the fact that $$ \v = \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix}. $$</p> <p>Making this substitution for $\v,$ multiplying term-by-term to evaluate the inner product, and simplifying algebraically, we have \begin{align} \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix} x \\ y \end{pmatrix} &amp;= \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix},\\ (x_B - x_A)x + (y_B - y_A)y &amp;= (x_B - x_A)\left(\xC\right) + (y_B - y_A)\left(\yC\right),\\ (x_B - x_A)x + (y_B - y_A)y &amp;= \frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2).
\end{align} For any given points $A$ and $B$ we can then substitute the given values of $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the equation of a line in the simple format $px + qy = k$ for constants $p,$ $q,$ and $k.$ The advantage of this approach over most others is that it has no special cases to watch out for (such as when the segment $AB$ is horizontal or vertical); it works exactly the same for every pair of points $A$ and $B,$ requiring only that they be distinct points. But if you must have the equation in a different format (such as, "$y=mx+b$ unless the line is vertical, in which case write $x=k$"), it is easy to convert the equation above into the desired format.</p> <p>In the particular instance given in the question, $x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$ and the equation of the line simplifies to $$ -2x + 0y = 0, $$ or even more simply, $$ x = 0. $$</p> Answer 3: <p>The line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$, and the perpendicular bisector of these two points is the $y$-axis.</p> Answer 4: <p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to the x-axis. So the perpendicular bisector is parallel to the y-axis $\implies x=c$.</p> <p>And it passes through $(0,-2)$ [the midpoint].</p> <p>Hence the equation of the perpendicular bisector is $x=0$.</p> Answer 5: <p>Since you know the coordinates of two points of the line, $(-1,-2), (1,-2)$, assume that the equation of the line is of the form $y=mx+c$.</p> <p>Now putting $(x,y)=(1,-2),(-1,-2)$ will give you the value of $m$ as $m=\frac{-2-(-2)}{-1-1}=0$.</p> <p>Let $m'$ be the slope of the perpendicular bisector; then $m\times m'=-1$, which has no solution when $m=0$, so $m'$ is not defined (the bisector is vertical).</p> <p>Finding the coordinates of the midpoint of the segment: $x=\frac{1-1}{2}=0$ and $y=\frac{-2-2}{2}=-2$, i.e. $(x,y)=(0,-2)$.</p> <p>Now you have the slope of the perpendicular and also the coordinates of one of its points.</p> <p>So $\frac{y-y_1}{x-x_1}=m'\implies \frac{y+2}{x-0}=$ not defined $\implies x=0$.</p> <p>So the equation of the perpendicular bisector is $x=0$; that is, the perpendicular bisector of the given segment is the $y$-axis.</p> <p><a href="https://www.desmos.com/calculator/2u6xyye8mv" rel="nofollow noreferrer">Here is a graph of your problem:</a></p> Answer 6: <p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p> <p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p> <p>$$ \iff x = 0$$</p> Answer 7: <p>To find the required equation, follow the procedure mentioned below.</p> <p><em>1. Find the mid-point of the given two points.</em></p> <p><em>2. The mid-point is (0,-2). The given line is parallel to the x-axis; therefore, any line perpendicular to it will be of the form x = k, where k is a constant. This is because the abscissa must be constant for a line to be parallel to the y-axis and hence perpendicular to the x-axis.</em></p> <p><em>3. In this case k = 0.</em></p> <p>Therefore, <strong>x = 0</strong> is the required equation.</p> Answer 8: <p>Your approach of linking the slopes is incorrect here.</p> <p>The equal-distance property is to be used.</p> <p>$$ \sqrt{(x-1)^2 + (y+2)^2} = \sqrt{(x+1)^2 + (y+2)^2} $$</p> <p>Simplifying, you get $x=0$, i.e. the $y$-axis.</p> <p>To verify it, note that between the two points only the sign of the $x$ coordinate has changed. So the locus is halfway in between, that is, the $y$-axis.</p> Answer 9: <p>A couple of important details:</p> <ol> <li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint.
Presumably you actually want to find the perpendicular bisector of the <strong>segment</strong> that has endpoints at <span class="math-container">$(1,-2)$</span> and <span class="math-container">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li> <li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li> </ol> <p>So now let's figure out what the segment going from <span class="math-container">$(1,-2)$</span> to <span class="math-container">$(-1,-2)$</span> actually looks like. The two points have the same <span class="math-container">$y$</span>-coordinate, which means the segment is horizontal; see below. <a href="https://i.stack.imgur.com/ujNFY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ujNFY.png" alt="enter image description here" /></a></p> <p>You can probably tell directly from the graph that the midpoint is at <span class="math-container">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p> <p>So now you want a vertical line that goes through <span class="math-container">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class="math-container">$x$</span>-coordinate is <span class="math-container">$0$</span>. In other words, the equation of the perpendicular bisector is <span class="math-container">$x=0$</span> -- which is actually the <span class="math-container">$y$</span>-axis itself.</p>
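<p>As a quick sanity check of the formulas above, here is a minimal Python sketch (the helper name <code>perpendicular_bisector</code> is illustrative, not from any of the answers). It implements the coefficient formula $(x_B - x_A)x + (y_B - y_A)y = \frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2)$ derived in Answer 2, and then checks the equal-distance property used in Answers 6 and 8, for $A=(1,-2)$ and $B=(-1,-2)$.</p>
<pre><code>
# Minimal sketch (names are illustrative, not from the answers): compute the
# coefficients (p, q, k) of the perpendicular bisector p*x + q*y = k of the
# segment AB, via (xB - xA)x + (yB - yA)y = (xB^2 - xA^2 + yB^2 - yA^2)/2.
from math import dist

def perpendicular_bisector(A, B):
    (xa, ya), (xb, yb) = A, B
    p = xb - xa                              # coefficient of x
    q = yb - ya                              # coefficient of y
    k = (xb**2 - xa**2 + yb**2 - ya**2) / 2  # right-hand side
    return p, q, k

p, q, k = perpendicular_bisector((1, -2), (-1, -2))
print(p, q, k)  # -2 0 0.0, i.e. -2x + 0y = 0, which simplifies to x = 0

# Equal-distance check: every point with x = 0 is equidistant from A and B.
for y in (-5.0, 0.0, 3.0):
    assert dist((0.0, y), (1, -2)) == dist((0.0, y), (-1, -2))
</code></pre>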
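<p>Assuming SymPy is available (it is not used anywhere in this thread, so this is purely an optional check), the equal-distance equation from Answer 8 can also be solved symbolically, and it simplifies directly to $x = 0$:</p>
<pre><code>
# Sketch (assumes sympy is installed): solve the equal-distance equation
# (x-1)^2 + (y+2)^2 = (x+1)^2 + (y+2)^2 for x; the bisector is x = 0.
import sympy as sp

x, y = sp.symbols('x y', real=True)
left = (x - 1)**2 + (y + 2)**2   # squared distance to (1, -2)
right = (x + 1)**2 + (y + 2)**2  # squared distance to (-1, -2)
print(sp.solve(sp.Eq(left, right), x))  # [0]
</code></pre>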
$$ Now let $\newcommand{\v}{\mathbf v}\v = \b - \a.$ Then the equation $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = c, \tag1 $$ where $\v\cdot\mathbf u$ is the inner product (vector "dot" product) of $\v$ and $\mathbf u$, is the equation of a line perpendicular to $\v,$ and therefore perpendicular to the segment $AB.$ The constant $c$ determines which member of that family of lines the equation describes.</p> <p>We want a line perpendicular to $AB$ that passes through the midpoint of the segment $AB,$ that is, the point $\newcommand{\xC}{\frac{x_A+x_B}{2}}\newcommand{\yC}{\frac{y_A+y_B}{2}} \left(\xC, \yC\right).$ In order for this point to be on the line described by Equation $1,$ it must be true that $$ \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix} = c. $$ We can use this fact to substitute for $c$ in Equation $1,$ with the result $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix}. $$ This is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a "simpler" form of the equation is desired. </p> <p>Such a form can be obtained using the fact that $$ \v = \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix}. $$</p> <p>Making this substitution for $\v,$ multiplying term-by-term to evaluate the inner product, and simplifying algebraically, we have \begin{align} \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix} x \\ y \end{pmatrix} &amp;= \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix},\\ (x_B - x_A)x + (y_B - y_A)y &amp;= (x_B - x_A)\left(\xC\right) + (y_B - y_A)\left(\yC\right),\\ (x_B - x_A)x + (y_B - y_A)y &amp;= \frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2). \end{align} For any given points $A$ and $B$ we can then substitute the given values of $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the equation of a line in the simple format $px + qy = k$ for constants $p,$ $q,$ and $k.$ The advantage of this approach over most others is that it has no special cases to watch out for (such as when the segment $AB$ is horizontal or vertical); it works exactly the same for every pair of points $A$ and $B,$ requiring only that they be distinct points. But if you must have the equation in a different format (such as, "$y=mx+b$ unless the line is vertical, in which case write $x=k$"), it is easy to convert the equation above into the desired format.</p> <p>In the particular instance given in the question, $x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$ and the equation of the line simplifies to $$ -2x + 0y = 0, $$ or even more simply, $$ x = 0. $$</p> Answer 12: <p>the line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$ and the perpendicular bisector of these two points is the $y$-axes.</p> Answer 13: <p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to x-axis. 
So, the perpendicular bisector is parallel to y-axis.$\implies x=c$</p> <p>And it passes through $(0,-2)$[midpoint]</p> <p>Hence the perpendicular bisector equation is $x=0$</p> Answer 14: <p>Since you know coordinates of two points of the line $(-1,-2), (1,-2)$, assume that equation the line is of the form $y=mx+c$.</p> <p>Now putting $(x,y)=(1,-2),(-1,-2)$, will give you the value of $m$ as $m=\frac{-2-(-2)}{2}=0$ </p> <p>Let, $m'$ be the slope of perpendicular bisector, then $m\times m'=-1$ or $m'=$ not defined.</p> <p>Finding the coordinates of mid points of the line as $x=\frac{1-1}{2}=0$ and $y=\frac{-2-2}{2}=-2$ or $(x,y)=(0,-2)$.</p> <p>Now, you have slope for the perpendicular and also coordinates of one of its points. </p> <p>So, $\frac{y-y_1}{x-x_1}=m'\implies \frac{y+2}{x-0}=$ not defined $\implies x=0$.</p> <p>So, equation of perpendicular bisector is $x=0$ or the perpendicular bisector of the given line is $y-$axis.</p> <p><a href="https://www.desmos.com/calculator/2u6xyye8mv" rel="nofollow noreferrer">Here is graph of your problem:</a></p> Answer 15: <p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p> <p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p> <p>$$ \iff x = 0$$</p> Answer 16: <p>To find the required equation follow procedure mentioned below.</p> <p><em>1. Find Mid-Point of the given two points</em>. ;</p> <p><em>2. Mid-Point is (0,-2). Given line is parallel to the x-axis therefore, any line perpendicular to it will be of the form x = k where k is constant. Because value of abscissa must be constant for a line to be parallel to y-axis and hence, perpendicular to x-axis.</em></p> <p><em>3. In this case k = 0</em></p> <p>Therefore, <strong>x = 0</strong> is the required equation.</p> Answer 17: <p>Your approach to link to slope is incorrect.</p> <p>Equal distance property is to be used.</p> <p>$$ \sqrt{(x-1)^2 + (y+2)^2} = \sqrt{(x+1)^2 + (y+2)^2} $$</p> <p>Simplifying you get $x=0$- or $y-$ axis.</p> <p>To verify it, note that between the two points only $x$ coordinate sign has changed. So the locus is halfway in between, that is, the $y-$ axis.</p> Answer 18: <p>A couple of important details:</p> <ol> <li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendcular bisector of the <strong>segment</strong> that has endpoints at <span class="math-container">$(1,-2)$</span> and <span class="math-container">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li> <li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li> </ol> <p>So now let's figure out what the segment going from <span class="math-container">$(1,-2)$</span> to <span class="math-container">$(-1,-2)$</span> actually looks like. The two points have the same <span class="math-container">$y$</span>-coordinate, which means the segment is horizontal; see below. 
<a href="https://i.stack.imgur.com/ujNFY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ujNFY.png" alt="enter image description here" /></a></p> <p>You can probably tell directly from the graph that the midpoint is at <span class="math-container">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p> <p>So now you want a vertical line that goes through <span class="math-container">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class="math-container">$x$</span>-coordinate is <span class="math-container">$0$</span>. In other words, the equation of the perpendicular bisector is <span class="math-container">$x=0$</span> -- which is actually the <span class="math-container">$y$</span>-axis itself.</p> Answer 19: <p>While solving problems on coordinate geometry, it often helps to draw the diagram. If you draw the diagram, you'll notice that the perpendicular bisector of the given line is the y-axis, i.e. $x = 0$.</p> <p>For a rigorous solution, see the following:</p> <p>We use the section formula to find out the mid-point of the line segment joining $(1, -2)$ and $(-1, -2)$. Thus the midpoint is $\left ( \frac{1 + (-1)}{2}, \frac{-2 + (-2)}{2} \right ) = (0, -2)$. Now this line segment has a slope of $\tan 0 = 0$. So it's perpendicular bisector must have a slope of $\tan 90°$ meaning it is vertical. So, write the equation of the line as:</p> <p>$$ (x - 0) = (\cot 90°)(y - (-2)) \\ \implies x = 0 $$</p> <p>Done.</p> Answer 20: <p>I'll approach this as a more general problem, using techniques that are probably beyond what this question was expecting the solution to use. Consider this an answer for future reference.</p> <hr> <p><strong>Problem:</strong></p> <p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the perpendicular bisector of the segment $AB.$</p> <p><strong>Solution:</strong></p> <p>Define vectors $\newcommand{\a}{\mathbf a}\a$ and $\newcommand{\b}{\mathbf b}\b$ equal to the displacements of $A$ and $B$ from the origin: $$ \a = \begin{pmatrix} x_A \\ y_A \end{pmatrix}, \qquad \b = \begin{pmatrix} x_B \\ y_B \end{pmatrix}. $$ Now let $\newcommand{\v}{\mathbf v}\v = \b - \a.$ Then the equation $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = c, \tag1 $$ where $\v\cdot\mathbf u$ is the inner product (vector "dot" product) of $\v$ and $\mathbf u$, is the equation of a line perpendicular to $\v,$ and therefore perpendicular to the segment $AB.$ The constant $c$ determines which member of that family of lines the equation describes.</p> <p>We want a line perpendicular to $AB$ that passes through the midpoint of the segment $AB,$ that is, the point $\newcommand{\xC}{\frac{x_A+x_B}{2}}\newcommand{\yC}{\frac{y_A+y_B}{2}} \left(\xC, \yC\right).$ In order for this point to be on the line described by Equation $1,$ it must be true that $$ \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix} = c. $$ We can use this fact to substitute for $c$ in Equation $1,$ with the result $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix}. $$ This is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a "simpler" form of the equation is desired. </p> <p>Such a form can be obtained using the fact that $$ \v = \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix}. 
$$</p> <p>Making this substitution for $\v,$ multiplying term-by-term to evaluate the inner product, and simplifying algebraically, we have \begin{align} \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix} x \\ y \end{pmatrix} &amp;= \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix},\\ (x_B - x_A)x + (y_B - y_A)y &amp;= (x_B - x_A)\left(\xC\right) + (y_B - y_A)\left(\yC\right),\\ (x_B - x_A)x + (y_B - y_A)y &amp;= \frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2). \end{align} For any given points $A$ and $B$ we can then substitute the given values of $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the equation of a line in the simple format $px + qy = k$ for constants $p,$ $q,$ and $k.$ The advantage of this approach over most others is that it has no special cases to watch out for (such as when the segment $AB$ is horizontal or vertical); it works exactly the same for every pair of points $A$ and $B,$ requiring only that they be distinct points. But if you must have the equation in a different format (such as, "$y=mx+b$ unless the line is vertical, in which case write $x=k$"), it is easy to convert the equation above into the desired format.</p> <p>In the particular instance given in the question, $x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$ and the equation of the line simplifies to $$ -2x + 0y = 0, $$ or even more simply, $$ x = 0. $$</p> Answer 21: <p>the line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$ and the perpendicular bisector of these two points is the $y$-axes.</p> Answer 22: <p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to x-axis. So, the perpendicular bisector is parallel to y-axis.$\implies x=c$</p> <p>And it passes through $(0,-2)$[midpoint]</p> <p>Hence the perpendicular bisector equation is $x=0$</p> Answer 23: <p>Since you know coordinates of two points of the line $(-1,-2), (1,-2)$, assume that equation the line is of the form $y=mx+c$.</p> <p>Now putting $(x,y)=(1,-2),(-1,-2)$, will give you the value of $m$ as $m=\frac{-2-(-2)}{2}=0$ </p> <p>Let, $m'$ be the slope of perpendicular bisector, then $m\times m'=-1$ or $m'=$ not defined.</p> <p>Finding the coordinates of mid points of the line as $x=\frac{1-1}{2}=0$ and $y=\frac{-2-2}{2}=-2$ or $(x,y)=(0,-2)$.</p> <p>Now, you have slope for the perpendicular and also coordinates of one of its points. </p> <p>So, $\frac{y-y_1}{x-x_1}=m'\implies \frac{y+2}{x-0}=$ not defined $\implies x=0$.</p> <p>So, equation of perpendicular bisector is $x=0$ or the perpendicular bisector of the given line is $y-$axis.</p> <p><a href="https://www.desmos.com/calculator/2u6xyye8mv" rel="nofollow noreferrer">Here is graph of your problem:</a></p> Answer 24: <p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p> <p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p> <p>$$ \iff x = 0$$</p> Answer 25: <p>To find the required equation follow procedure mentioned below.</p> <p><em>1. Find Mid-Point of the given two points</em>. ;</p> <p><em>2. Mid-Point is (0,-2). Given line is parallel to the x-axis therefore, any line perpendicular to it will be of the form x = k where k is constant. Because value of abscissa must be constant for a line to be parallel to y-axis and hence, perpendicular to x-axis.</em></p> <p><em>3. 
In this case k = 0</em></p> <p>Therefore, <strong>x = 0</strong> is the required equation.</p> Answer 26: <p>Your approach to link to slope is incorrect.</p> <p>Equal distance property is to be used.</p> <p>$$ \sqrt{(x-1)^2 + (y+2)^2} = \sqrt{(x+1)^2 + (y+2)^2} $$</p> <p>Simplifying you get $x=0$- or $y-$ axis.</p> <p>To verify it, note that between the two points only $x$ coordinate sign has changed. So the locus is halfway in between, that is, the $y-$ axis.</p> Answer 27: <p>A couple of important details:</p> <ol> <li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendcular bisector of the <strong>segment</strong> that has endpoints at <span class="math-container">$(1,-2)$</span> and <span class="math-container">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li> <li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li> </ol> <p>So now let's figure out what the segment going from <span class="math-container">$(1,-2)$</span> to <span class="math-container">$(-1,-2)$</span> actually looks like. The two points have the same <span class="math-container">$y$</span>-coordinate, which means the segment is horizontal; see below. <a href="https://i.stack.imgur.com/ujNFY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ujNFY.png" alt="enter image description here" /></a></p> <p>You can probably tell directly from the graph that the midpoint is at <span class="math-container">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p> <p>So now you want a vertical line that goes through <span class="math-container">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class="math-container">$x$</span>-coordinate is <span class="math-container">$0$</span>. In other words, the equation of the perpendicular bisector is <span class="math-container">$x=0$</span> -- which is actually the <span class="math-container">$y$</span>-axis itself.</p> Answer 28: <p>While solving problems on coordinate geometry, it often helps to draw the diagram. If you draw the diagram, you'll notice that the perpendicular bisector of the given line is the y-axis, i.e. $x = 0$.</p> <p>For a rigorous solution, see the following:</p> <p>We use the section formula to find out the mid-point of the line segment joining $(1, -2)$ and $(-1, -2)$. Thus the midpoint is $\left ( \frac{1 + (-1)}{2}, \frac{-2 + (-2)}{2} \right ) = (0, -2)$. Now this line segment has a slope of $\tan 0 = 0$. So it's perpendicular bisector must have a slope of $\tan 90°$ meaning it is vertical. So, write the equation of the line as:</p> <p>$$ (x - 0) = (\cot 90°)(y - (-2)) \\ \implies x = 0 $$</p> <p>Done.</p> Answer 29: <p>I'll approach this as a more general problem, using techniques that are probably beyond what this question was expecting the solution to use. 
Consider this an answer for future reference.</p> <hr> <p><strong>Problem:</strong></p> <p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the perpendicular bisector of the segment $AB.$</p> <p><strong>Solution:</strong></p> <p>Define vectors $\newcommand{\a}{\mathbf a}\a$ and $\newcommand{\b}{\mathbf b}\b$ equal to the displacements of $A$ and $B$ from the origin: $$ \a = \begin{pmatrix} x_A \\ y_A \end{pmatrix}, \qquad \b = \begin{pmatrix} x_B \\ y_B \end{pmatrix}. $$ Now let $\newcommand{\v}{\mathbf v}\v = \b - \a.$ Then the equation $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = c, \tag1 $$ where $\v\cdot\mathbf u$ is the inner product (vector "dot" product) of $\v$ and $\mathbf u$, is the equation of a line perpendicular to $\v,$ and therefore perpendicular to the segment $AB.$ The constant $c$ determines which member of that family of lines the equation describes.</p> <p>We want a line perpendicular to $AB$ that passes through the midpoint of the segment $AB,$ that is, the point $\newcommand{\xC}{\frac{x_A+x_B}{2}}\newcommand{\yC}{\frac{y_A+y_B}{2}} \left(\xC, \yC\right).$ In order for this point to be on the line described by Equation $1,$ it must be true that $$ \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix} = c. $$ We can use this fact to substitute for $c$ in Equation $1,$ with the result $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix}. $$ This is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a "simpler" form of the equation is desired. </p> <p>Such a form can be obtained using the fact that $$ \v = \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix}. $$</p> <p>Making this substitution for $\v,$ multiplying term-by-term to evaluate the inner product, and simplifying algebraically, we have \begin{align} \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix} x \\ y \end{pmatrix} &amp;= \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix},\\ (x_B - x_A)x + (y_B - y_A)y &amp;= (x_B - x_A)\left(\xC\right) + (y_B - y_A)\left(\yC\right),\\ (x_B - x_A)x + (y_B - y_A)y &amp;= \frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2). \end{align} For any given points $A$ and $B$ we can then substitute the given values of $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the equation of a line in the simple format $px + qy = k$ for constants $p,$ $q,$ and $k.$ The advantage of this approach over most others is that it has no special cases to watch out for (such as when the segment $AB$ is horizontal or vertical); it works exactly the same for every pair of points $A$ and $B,$ requiring only that they be distinct points. But if you must have the equation in a different format (such as, "$y=mx+b$ unless the line is vertical, in which case write $x=k$"), it is easy to convert the equation above into the desired format.</p> <p>In the particular instance given in the question, $x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$ and the equation of the line simplifies to $$ -2x + 0y = 0, $$ or even more simply, $$ x = 0. $$</p> Answer 30: <p>the line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$ and the perpendicular bisector of these two points is the $y$-axes.</p> Answer 31: <p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to x-axis. 
So, the perpendicular bisector is parallel to y-axis.$\implies x=c$</p> <p>And it passes through $(0,-2)$[midpoint]</p> <p>Hence the perpendicular bisector equation is $x=0$</p> Answer 32: <p>Since you know coordinates of two points of the line $(-1,-2), (1,-2)$, assume that equation the line is of the form $y=mx+c$.</p> <p>Now putting $(x,y)=(1,-2),(-1,-2)$, will give you the value of $m$ as $m=\frac{-2-(-2)}{2}=0$ </p> <p>Let, $m'$ be the slope of perpendicular bisector, then $m\times m'=-1$ or $m'=$ not defined.</p> <p>Finding the coordinates of mid points of the line as $x=\frac{1-1}{2}=0$ and $y=\frac{-2-2}{2}=-2$ or $(x,y)=(0,-2)$.</p> <p>Now, you have slope for the perpendicular and also coordinates of one of its points. </p> <p>So, $\frac{y-y_1}{x-x_1}=m'\implies \frac{y+2}{x-0}=$ not defined $\implies x=0$.</p> <p>So, equation of perpendicular bisector is $x=0$ or the perpendicular bisector of the given line is $y-$axis.</p> <p><a href="https://www.desmos.com/calculator/2u6xyye8mv" rel="nofollow noreferrer">Here is graph of your problem:</a></p> Answer 33: <p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p> <p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p> <p>$$ \iff x = 0$$</p> Answer 34: <p>To find the required equation follow procedure mentioned below.</p> <p><em>1. Find Mid-Point of the given two points</em>. ;</p> <p><em>2. Mid-Point is (0,-2). Given line is parallel to the x-axis therefore, any line perpendicular to it will be of the form x = k where k is constant. Because value of abscissa must be constant for a line to be parallel to y-axis and hence, perpendicular to x-axis.</em></p> <p><em>3. In this case k = 0</em></p> <p>Therefore, <strong>x = 0</strong> is the required equation.</p> Answer 35: <p>Your approach to link to slope is incorrect.</p> <p>Equal distance property is to be used.</p> <p>$$ \sqrt{(x-1)^2 + (y+2)^2} = \sqrt{(x+1)^2 + (y+2)^2} $$</p> <p>Simplifying you get $x=0$- or $y-$ axis.</p> <p>To verify it, note that between the two points only $x$ coordinate sign has changed. So the locus is halfway in between, that is, the $y-$ axis.</p> Answer 36: <p>A couple of important details:</p> <ol> <li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendcular bisector of the <strong>segment</strong> that has endpoints at <span class="math-container">$(1,-2)$</span> and <span class="math-container">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li> <li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li> </ol> <p>So now let's figure out what the segment going from <span class="math-container">$(1,-2)$</span> to <span class="math-container">$(-1,-2)$</span> actually looks like. The two points have the same <span class="math-container">$y$</span>-coordinate, which means the segment is horizontal; see below. 
<a href="https://i.stack.imgur.com/ujNFY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ujNFY.png" alt="enter image description here" /></a></p> <p>You can probably tell directly from the graph that the midpoint is at <span class="math-container">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p> <p>So now you want a vertical line that goes through <span class="math-container">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class="math-container">$x$</span>-coordinate is <span class="math-container">$0$</span>. In other words, the equation of the perpendicular bisector is <span class="math-container">$x=0$</span> -- which is actually the <span class="math-container">$y$</span>-axis itself.</p> Answer 37: <p>While solving problems on coordinate geometry, it often helps to draw the diagram. If you draw the diagram, you'll notice that the perpendicular bisector of the given line is the y-axis, i.e. $x = 0$.</p> <p>For a rigorous solution, see the following:</p> <p>We use the section formula to find out the mid-point of the line segment joining $(1, -2)$ and $(-1, -2)$. Thus the midpoint is $\left ( \frac{1 + (-1)}{2}, \frac{-2 + (-2)}{2} \right ) = (0, -2)$. Now this line segment has a slope of $\tan 0 = 0$. So it's perpendicular bisector must have a slope of $\tan 90°$ meaning it is vertical. So, write the equation of the line as:</p> <p>$$ (x - 0) = (\cot 90°)(y - (-2)) \\ \implies x = 0 $$</p> <p>Done.</p> Answer 38: <p>I'll approach this as a more general problem, using techniques that are probably beyond what this question was expecting the solution to use. Consider this an answer for future reference.</p> <hr> <p><strong>Problem:</strong></p> <p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the perpendicular bisector of the segment $AB.$</p> <p><strong>Solution:</strong></p> <p>Define vectors $\newcommand{\a}{\mathbf a}\a$ and $\newcommand{\b}{\mathbf b}\b$ equal to the displacements of $A$ and $B$ from the origin: $$ \a = \begin{pmatrix} x_A \\ y_A \end{pmatrix}, \qquad \b = \begin{pmatrix} x_B \\ y_B \end{pmatrix}. $$ Now let $\newcommand{\v}{\mathbf v}\v = \b - \a.$ Then the equation $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = c, \tag1 $$ where $\v\cdot\mathbf u$ is the inner product (vector "dot" product) of $\v$ and $\mathbf u$, is the equation of a line perpendicular to $\v,$ and therefore perpendicular to the segment $AB.$ The constant $c$ determines which member of that family of lines the equation describes.</p> <p>We want a line perpendicular to $AB$ that passes through the midpoint of the segment $AB,$ that is, the point $\newcommand{\xC}{\frac{x_A+x_B}{2}}\newcommand{\yC}{\frac{y_A+y_B}{2}} \left(\xC, \yC\right).$ In order for this point to be on the line described by Equation $1,$ it must be true that $$ \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix} = c. $$ We can use this fact to substitute for $c$ in Equation $1,$ with the result $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix}. $$ This is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a "simpler" form of the equation is desired. </p> <p>Such a form can be obtained using the fact that $$ \v = \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix}. 
$$</p> <p>Making this substitution for $\v,$ multiplying term-by-term to evaluate the inner product, and simplifying algebraically, we have \begin{align} \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix} x \\ y \end{pmatrix} &amp;= \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix},\\ (x_B - x_A)x + (y_B - y_A)y &amp;= (x_B - x_A)\left(\xC\right) + (y_B - y_A)\left(\yC\right),\\ (x_B - x_A)x + (y_B - y_A)y &amp;= \frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2). \end{align} For any given points $A$ and $B$ we can then substitute the given values of $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the equation of a line in the simple format $px + qy = k$ for constants $p,$ $q,$ and $k.$ The advantage of this approach over most others is that it has no special cases to watch out for (such as when the segment $AB$ is horizontal or vertical); it works exactly the same for every pair of points $A$ and $B,$ requiring only that they be distinct points. But if you must have the equation in a different format (such as, "$y=mx+b$ unless the line is vertical, in which case write $x=k$"), it is easy to convert the equation above into the desired format.</p> <p>In the particular instance given in the question, $x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$ and the equation of the line simplifies to $$ -2x + 0y = 0, $$ or even more simply, $$ x = 0. $$</p> Answer 39: <p>the line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$ and the perpendicular bisector of these two points is the $y$-axes.</p> Answer 40: <p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to x-axis. So, the perpendicular bisector is parallel to y-axis.$\implies x=c$</p> <p>And it passes through $(0,-2)$[midpoint]</p> <p>Hence the perpendicular bisector equation is $x=0$</p> Answer 41: <p>Since you know coordinates of two points of the line $(-1,-2), (1,-2)$, assume that equation the line is of the form $y=mx+c$.</p> <p>Now putting $(x,y)=(1,-2),(-1,-2)$, will give you the value of $m$ as $m=\frac{-2-(-2)}{2}=0$ </p> <p>Let, $m'$ be the slope of perpendicular bisector, then $m\times m'=-1$ or $m'=$ not defined.</p> <p>Finding the coordinates of mid points of the line as $x=\frac{1-1}{2}=0$ and $y=\frac{-2-2}{2}=-2$ or $(x,y)=(0,-2)$.</p> <p>Now, you have slope for the perpendicular and also coordinates of one of its points. </p> <p>So, $\frac{y-y_1}{x-x_1}=m'\implies \frac{y+2}{x-0}=$ not defined $\implies x=0$.</p> <p>So, equation of perpendicular bisector is $x=0$ or the perpendicular bisector of the given line is $y-$axis.</p> <p><a href="https://www.desmos.com/calculator/2u6xyye8mv" rel="nofollow noreferrer">Here is graph of your problem:</a></p> Answer 42: <p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p> <p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p> <p>$$ \iff x = 0$$</p> Answer 43: <p>To find the required equation follow procedure mentioned below.</p> <p><em>1. Find Mid-Point of the given two points</em>. ;</p> <p><em>2. Mid-Point is (0,-2). Given line is parallel to the x-axis therefore, any line perpendicular to it will be of the form x = k where k is constant. Because value of abscissa must be constant for a line to be parallel to y-axis and hence, perpendicular to x-axis.</em></p> <p><em>3. 
In this case k = 0</em></p> <p>Therefore, <strong>x = 0</strong> is the required equation.</p> Answer 44: <p>Your approach to link to slope is incorrect.</p> <p>Equal distance property is to be used.</p> <p>$$ \sqrt{(x-1)^2 + (y+2)^2} = \sqrt{(x+1)^2 + (y+2)^2} $$</p> <p>Simplifying you get $x=0$- or $y-$ axis.</p> <p>To verify it, note that between the two points only $x$ coordinate sign has changed. So the locus is halfway in between, that is, the $y-$ axis.</p> Answer 45: <p>A couple of important details:</p> <ol> <li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendcular bisector of the <strong>segment</strong> that has endpoints at <span class="math-container">$(1,-2)$</span> and <span class="math-container">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li> <li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li> </ol> <p>So now let's figure out what the segment going from <span class="math-container">$(1,-2)$</span> to <span class="math-container">$(-1,-2)$</span> actually looks like. The two points have the same <span class="math-container">$y$</span>-coordinate, which means the segment is horizontal; see below. <a href="https://i.stack.imgur.com/ujNFY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ujNFY.png" alt="enter image description here" /></a></p> <p>You can probably tell directly from the graph that the midpoint is at <span class="math-container">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p> <p>So now you want a vertical line that goes through <span class="math-container">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class="math-container">$x$</span>-coordinate is <span class="math-container">$0$</span>. In other words, the equation of the perpendicular bisector is <span class="math-container">$x=0$</span> -- which is actually the <span class="math-container">$y$</span>-axis itself.</p> Answer 46: <p>While solving problems on coordinate geometry, it often helps to draw the diagram. If you draw the diagram, you'll notice that the perpendicular bisector of the given line is the y-axis, i.e. $x = 0$.</p> <p>For a rigorous solution, see the following:</p> <p>We use the section formula to find out the mid-point of the line segment joining $(1, -2)$ and $(-1, -2)$. Thus the midpoint is $\left ( \frac{1 + (-1)}{2}, \frac{-2 + (-2)}{2} \right ) = (0, -2)$. Now this line segment has a slope of $\tan 0 = 0$. So it's perpendicular bisector must have a slope of $\tan 90°$ meaning it is vertical. So, write the equation of the line as:</p> <p>$$ (x - 0) = (\cot 90°)(y - (-2)) \\ \implies x = 0 $$</p> <p>Done.</p> Answer 47: <p>I'll approach this as a more general problem, using techniques that are probably beyond what this question was expecting the solution to use. 
Consider this an answer for future reference.</p> <hr> <p><strong>Problem:</strong></p> <p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the perpendicular bisector of the segment $AB.$</p> <p><strong>Solution:</strong></p> <p>Define vectors $\newcommand{\a}{\mathbf a}\a$ and $\newcommand{\b}{\mathbf b}\b$ equal to the displacements of $A$ and $B$ from the origin: $$ \a = \begin{pmatrix} x_A \\ y_A \end{pmatrix}, \qquad \b = \begin{pmatrix} x_B \\ y_B \end{pmatrix}. $$ Now let $\newcommand{\v}{\mathbf v}\v = \b - \a.$ Then the equation $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = c, \tag1 $$ where $\v\cdot\mathbf u$ is the inner product (vector "dot" product) of $\v$ and $\mathbf u$, is the equation of a line perpendicular to $\v,$ and therefore perpendicular to the segment $AB.$ The constant $c$ determines which member of that family of lines the equation describes.</p> <p>We want a line perpendicular to $AB$ that passes through the midpoint of the segment $AB,$ that is, the point $\newcommand{\xC}{\frac{x_A+x_B}{2}}\newcommand{\yC}{\frac{y_A+y_B}{2}} \left(\xC, \yC\right).$ In order for this point to be on the line described by Equation $1,$ it must be true that $$ \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix} = c. $$ We can use this fact to substitute for $c$ in Equation $1,$ with the result $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix}. $$ This is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a "simpler" form of the equation is desired. </p> <p>Such a form can be obtained using the fact that $$ \v = \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix}. $$</p> <p>Making this substitution for $\v,$ multiplying term-by-term to evaluate the inner product, and simplifying algebraically, we have \begin{align} \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix} x \\ y \end{pmatrix} &amp;= \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix},\\ (x_B - x_A)x + (y_B - y_A)y &amp;= (x_B - x_A)\left(\xC\right) + (y_B - y_A)\left(\yC\right),\\ (x_B - x_A)x + (y_B - y_A)y &amp;= \frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2). \end{align} For any given points $A$ and $B$ we can then substitute the given values of $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the equation of a line in the simple format $px + qy = k$ for constants $p,$ $q,$ and $k.$ The advantage of this approach over most others is that it has no special cases to watch out for (such as when the segment $AB$ is horizontal or vertical); it works exactly the same for every pair of points $A$ and $B,$ requiring only that they be distinct points. But if you must have the equation in a different format (such as, "$y=mx+b$ unless the line is vertical, in which case write $x=k$"), it is easy to convert the equation above into the desired format.</p> <p>In the particular instance given in the question, $x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$ and the equation of the line simplifies to $$ -2x + 0y = 0, $$ or even more simply, $$ x = 0. $$</p> Answer 48: <p>the line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$ and the perpendicular bisector of these two points is the $y$-axes.</p> Answer 49: <p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to x-axis. 
So, the perpendicular bisector is parallel to y-axis.$\implies x=c$</p> <p>And it passes through $(0,-2)$[midpoint]</p> <p>Hence the perpendicular bisector equation is $x=0$</p> Answer 50: <p>Since you know coordinates of two points of the line $(-1,-2), (1,-2)$, assume that equation the line is of the form $y=mx+c$.</p> <p>Now putting $(x,y)=(1,-2),(-1,-2)$, will give you the value of $m$ as $m=\frac{-2-(-2)}{2}=0$ </p> <p>Let, $m'$ be the slope of perpendicular bisector, then $m\times m'=-1$ or $m'=$ not defined.</p> <p>Finding the coordinates of mid points of the line as $x=\frac{1-1}{2}=0$ and $y=\frac{-2-2}{2}=-2$ or $(x,y)=(0,-2)$.</p> <p>Now, you have slope for the perpendicular and also coordinates of one of its points. </p> <p>So, $\frac{y-y_1}{x-x_1}=m'\implies \frac{y+2}{x-0}=$ not defined $\implies x=0$.</p> <p>So, equation of perpendicular bisector is $x=0$ or the perpendicular bisector of the given line is $y-$axis.</p> <p><a href="https://www.desmos.com/calculator/2u6xyye8mv" rel="nofollow noreferrer">Here is graph of your problem:</a></p> Answer 51: <p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p> <p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p> <p>$$ \iff x = 0$$</p> Answer 52: <p>To find the required equation follow procedure mentioned below.</p> <p><em>1. Find Mid-Point of the given two points</em>. ;</p> <p><em>2. Mid-Point is (0,-2). Given line is parallel to the x-axis therefore, any line perpendicular to it will be of the form x = k where k is constant. Because value of abscissa must be constant for a line to be parallel to y-axis and hence, perpendicular to x-axis.</em></p> <p><em>3. In this case k = 0</em></p> <p>Therefore, <strong>x = 0</strong> is the required equation.</p> Answer 53: <p>Your approach to link to slope is incorrect.</p> <p>Equal distance property is to be used.</p> <p>$$ \sqrt{(x-1)^2 + (y+2)^2} = \sqrt{(x+1)^2 + (y+2)^2} $$</p> <p>Simplifying you get $x=0$- or $y-$ axis.</p> <p>To verify it, note that between the two points only $x$ coordinate sign has changed. So the locus is halfway in between, that is, the $y-$ axis.</p> Answer 54: <p>A couple of important details:</p> <ol> <li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendcular bisector of the <strong>segment</strong> that has endpoints at <span class="math-container">$(1,-2)$</span> and <span class="math-container">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li> <li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li> </ol> <p>So now let's figure out what the segment going from <span class="math-container">$(1,-2)$</span> to <span class="math-container">$(-1,-2)$</span> actually looks like. The two points have the same <span class="math-container">$y$</span>-coordinate, which means the segment is horizontal; see below. 
<a href="https://i.stack.imgur.com/ujNFY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ujNFY.png" alt="enter image description here" /></a></p> <p>You can probably tell directly from the graph that the midpoint is at <span class="math-container">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p> <p>So now you want a vertical line that goes through <span class="math-container">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class="math-container">$x$</span>-coordinate is <span class="math-container">$0$</span>. In other words, the equation of the perpendicular bisector is <span class="math-container">$x=0$</span> -- which is actually the <span class="math-container">$y$</span>-axis itself.</p> Answer 55: <p>While solving problems on coordinate geometry, it often helps to draw the diagram. If you draw the diagram, you'll notice that the perpendicular bisector of the given line is the y-axis, i.e. $x = 0$.</p> <p>For a rigorous solution, see the following:</p> <p>We use the section formula to find out the mid-point of the line segment joining $(1, -2)$ and $(-1, -2)$. Thus the midpoint is $\left ( \frac{1 + (-1)}{2}, \frac{-2 + (-2)}{2} \right ) = (0, -2)$. Now this line segment has a slope of $\tan 0 = 0$. So it's perpendicular bisector must have a slope of $\tan 90°$ meaning it is vertical. So, write the equation of the line as:</p> <p>$$ (x - 0) = (\cot 90°)(y - (-2)) \\ \implies x = 0 $$</p> <p>Done.</p> Answer 56: <p>I'll approach this as a more general problem, using techniques that are probably beyond what this question was expecting the solution to use. Consider this an answer for future reference.</p> <hr> <p><strong>Problem:</strong></p> <p>Given $A=(x_A,y_A)$ and $B=(x_B,y_B),$ find an equation of the perpendicular bisector of the segment $AB.$</p> <p><strong>Solution:</strong></p> <p>Define vectors $\newcommand{\a}{\mathbf a}\a$ and $\newcommand{\b}{\mathbf b}\b$ equal to the displacements of $A$ and $B$ from the origin: $$ \a = \begin{pmatrix} x_A \\ y_A \end{pmatrix}, \qquad \b = \begin{pmatrix} x_B \\ y_B \end{pmatrix}. $$ Now let $\newcommand{\v}{\mathbf v}\v = \b - \a.$ Then the equation $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = c, \tag1 $$ where $\v\cdot\mathbf u$ is the inner product (vector "dot" product) of $\v$ and $\mathbf u$, is the equation of a line perpendicular to $\v,$ and therefore perpendicular to the segment $AB.$ The constant $c$ determines which member of that family of lines the equation describes.</p> <p>We want a line perpendicular to $AB$ that passes through the midpoint of the segment $AB,$ that is, the point $\newcommand{\xC}{\frac{x_A+x_B}{2}}\newcommand{\yC}{\frac{y_A+y_B}{2}} \left(\xC, \yC\right).$ In order for this point to be on the line described by Equation $1,$ it must be true that $$ \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix} = c. $$ We can use this fact to substitute for $c$ in Equation $1,$ with the result $$ \v \cdot \begin{pmatrix} x \\ y \end{pmatrix} = \v \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix}. $$ This is a perfectly valid equation of the perpendicular bisector of segment $AB.$ In a question such as posed here, however, no doubt a "simpler" form of the equation is desired. </p> <p>Such a form can be obtained using the fact that $$ \v = \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix}. 
$$</p> <p>Making this substitution for $\v,$ multiplying term-by-term to evaluate the inner product, and simplifying algebraically, we have \begin{align} \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix} x \\ y \end{pmatrix} &amp;= \begin{pmatrix} x_B - x_A \\ y_B - y_A \end{pmatrix} \cdot \begin{pmatrix}\xC \\ \yC\end{pmatrix},\\ (x_B - x_A)x + (y_B - y_A)y &amp;= (x_B - x_A)\left(\xC\right) + (y_B - y_A)\left(\yC\right),\\ (x_B - x_A)x + (y_B - y_A)y &amp;= \frac12(x_B^2 - x_A^2 + y_B^2 - y_A^2). \end{align} For any given points $A$ and $B$ we can then substitute the given values of $x_A,$ $y_A,$ $x_B,$ and $y_B$ into the equation to obtain the equation of a line in the simple format $px + qy = k$ for constants $p,$ $q,$ and $k.$ The advantage of this approach over most others is that it has no special cases to watch out for (such as when the segment $AB$ is horizontal or vertical); it works exactly the same for every pair of points $A$ and $B,$ requiring only that they be distinct points. But if you must have the equation in a different format (such as, "$y=mx+b$ unless the line is vertical, in which case write $x=k$"), it is easy to convert the equation above into the desired format.</p> <p>In the particular instance given in the question, $x_A=1,$ $y_A=-2,$ $x_B=-1,$ and $y_B=-2,$ and the equation of the line simplifies to $$ -2x + 0y = 0, $$ or even more simply, $$ x = 0. $$</p> Answer 57: <p>the line through the points $P_1(1,-2)$ and $P_2(-1,-2)$ has the equation $y=-2$ and the perpendicular bisector of these two points is the $y$-axes.</p> Answer 58: <p>Notice that the line passing through $(-1,-2)$ and $(1,-2)$ is parallel to x-axis. So, the perpendicular bisector is parallel to y-axis.$\implies x=c$</p> <p>And it passes through $(0,-2)$[midpoint]</p> <p>Hence the perpendicular bisector equation is $x=0$</p> Answer 59: <p>Since you know coordinates of two points of the line $(-1,-2), (1,-2)$, assume that equation the line is of the form $y=mx+c$.</p> <p>Now putting $(x,y)=(1,-2),(-1,-2)$, will give you the value of $m$ as $m=\frac{-2-(-2)}{2}=0$ </p> <p>Let, $m'$ be the slope of perpendicular bisector, then $m\times m'=-1$ or $m'=$ not defined.</p> <p>Finding the coordinates of mid points of the line as $x=\frac{1-1}{2}=0$ and $y=\frac{-2-2}{2}=-2$ or $(x,y)=(0,-2)$.</p> <p>Now, you have slope for the perpendicular and also coordinates of one of its points. </p> <p>So, $\frac{y-y_1}{x-x_1}=m'\implies \frac{y+2}{x-0}=$ not defined $\implies x=0$.</p> <p>So, equation of perpendicular bisector is $x=0$ or the perpendicular bisector of the given line is $y-$axis.</p> <p><a href="https://www.desmos.com/calculator/2u6xyye8mv" rel="nofollow noreferrer">Here is graph of your problem:</a></p> Answer 60: <p>The perpendicular bisector of $A$ and $B$ consists of all points an equal squared-distance away from $A$ and $B$, so:</p> <p>$$ (x-1)^2 + {(y+2)^2} = (x+1)^2 + {(y+2)^2}$$</p> <p>$$ \iff x = 0$$</p> Answer 61: <p>To find the required equation follow procedure mentioned below.</p> <p><em>1. Find Mid-Point of the given two points</em>. ;</p> <p><em>2. Mid-Point is (0,-2). Given line is parallel to the x-axis therefore, any line perpendicular to it will be of the form x = k where k is constant. Because value of abscissa must be constant for a line to be parallel to y-axis and hence, perpendicular to x-axis.</em></p> <p><em>3. 
In this case k = 0.</em></p> <p>Therefore, <strong>x = 0</strong> is the required equation.</p> Answer 62: <p>Your approach of linking to the slope is incorrect.</p> <p>The equal-distance property is to be used:</p> <p>$$ \sqrt{(x-1)^2 + (y+2)^2} = \sqrt{(x+1)^2 + (y+2)^2} $$</p> <p>Simplifying, you get $x=0$, the $y$-axis.</p> <p>To verify it, note that between the two points only the sign of the $x$ coordinate has changed. So the locus is halfway in between, that is, the $y$-axis.</p> Answer 63: <p>A couple of important details:</p> <ol> <li>A <em>line</em> does not have a perpendicular <strong>bisector</strong>, because a line has no midpoint. Presumably you actually want to find the perpendicular bisector of the <strong>segment</strong> that has endpoints at <span class="math-container">$(1,-2)$</span> and <span class="math-container">$(-1, -2)$</span>. In what follows, I assume that is what the question actually means.</li> <li>You seem to be using the fact that the slopes of perpendicular lines are &quot;opposite reciprocals&quot;. A lot of students (and even a lot of teachers!) forget this is only true for lines that are neither vertical nor horizontal. If a line is vertical, then any line perpendicular to it is horizontal, and vice versa.</li> </ol> <p>So now let's figure out what the segment going from <span class="math-container">$(1,-2)$</span> to <span class="math-container">$(-1,-2)$</span> actually looks like. The two points have the same <span class="math-container">$y$</span>-coordinate, which means the segment is horizontal; see below. <a href="https://i.stack.imgur.com/ujNFY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ujNFY.png" alt="enter image description here" /></a></p> <p>You can probably tell directly from the graph that the midpoint is at <span class="math-container">$(0,-2)$</span>, but if that isn't obvious (or you want something more computational to be convinced) you can use the midpoint formula.</p> <p>So now you want a vertical line that goes through <span class="math-container">$(0,-2)$</span>. What line is that? It's the line made up of all points whose <span class="math-container">$x$</span>-coordinate is <span class="math-container">$0$</span>. In other words, the equation of the perpendicular bisector is <span class="math-container">$x=0$</span> -- which is actually the <span class="math-container">$y$</span>-axis itself.</p>
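As an illustration of the general $px + qy = k$ formula derived in the vector-based answer above, here is a minimal Python sketch. It is an added example rather than part of any answer, and the function name `perpendicular_bisector` is a hypothetical helper of our own choosing.

```python
# Minimal sketch: coefficients (p, q, k) of the perpendicular bisector
# p*x + q*y = k of the segment AB, via the no-special-cases formula
# (x_B - x_A)x + (y_B - y_A)y = (x_B^2 - x_A^2 + y_B^2 - y_A^2) / 2.

def perpendicular_bisector(ax, ay, bx, by):
    """Return (p, q, k) such that p*x + q*y = k is the bisector of AB."""
    if (ax, ay) == (bx, by):
        raise ValueError("A and B must be distinct points")
    p = bx - ax
    q = by - ay
    k = (bx**2 - ax**2 + by**2 - ay**2) / 2
    return p, q, k

# The segment from the question: A = (1, -2), B = (-1, -2).
print(perpendicular_bisector(1, -2, -1, -2))  # (-2, 0, 0.0): -2x = 0, i.e. x = 0
```

Note that, as the answer emphasizes, no horizontal/vertical case split is needed: the vertical bisector simply comes out as $q = 0$.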
Find all real solutions to the equation $4^x+6^{x^2}=5^x+5^{x^2}$
2,079,546
6
https://math.stackexchange.com/questions/2079546/find-all-real-solutions-to-the-equation4x6x2-5x5x2
<p>Find all real solutions to the equation: $$4^x+6^{x^2}=5^x+5^{x^2}$$</p> <p>Obviously $0$ and $1$ satisfy this equation, but here $x$ is a real number and I do not know how to proceed. In fact, I suspect that this equation has no roots other than $0$ and $1$. But you know, I need a proof! Could anyone help me? Thanks a lot!</p>
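Before the answers, a quick numeric sanity check may help build confidence in the conjecture (this is an added sketch, not part of the original post, and certainly not a proof; the helper name `f` and the scanning grid are arbitrary choices). Scanning the sign of $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$ suggests it changes sign only near $0$ and $1$:

```python
# Numeric sanity check (not a proof): f(x) = 4^x + 6^(x^2) - 5^x - 5^(x^2)
# appears to change sign only near x = 0 and x = 1.

def f(x):
    return 4.0**x + 6.0**(x * x) - 5.0**x - 5.0**(x * x)

# Offset grid on roughly [-3, 3] so we never land exactly on a root,
# where the product test below would silently return 0.
xs = [-3.0 + 0.01 * i + 0.0037 for i in range(600)]
brackets = [(round(xs[i], 4), round(xs[i + 1], 4))
            for i in range(len(xs) - 1)
            if f(xs[i]) * f(xs[i + 1]) < 0]
print(brackets)  # expect two sign changes: one near 0, one near 1
```

The answers below turn this observation into a proof in two different ways.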
[ { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, applying the MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, so we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, so we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combining these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ only for $x_1=0.7655...$, and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has at most two roots and we are done</p>\n\n<p>because $0$ and $1$ are roots. 
</p>\n" }, { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, apply MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combine these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ for $x_1=0.7655...$ only and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has two roots maximum and we are done</p>\n\n<p>because $0$ and $1$ are roots. 
</p>\n" }, { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, apply MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combine these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ for $x_1=0.7655...$ only and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has two roots maximum and we are done</p>\n\n<p>because $0$ and $1$ are roots. 
</p>\n" }, { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, apply MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combine these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ for $x_1=0.7655...$ only and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has two roots maximum and we are done</p>\n\n<p>because $0$ and $1$ are roots. 
</p>\n" }, { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, apply MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combine these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ for $x_1=0.7655...$ only and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has two roots maximum and we are done</p>\n\n<p>because $0$ and $1$ are roots. 
</p>\n" }, { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, apply MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combine these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ for $x_1=0.7655...$ only and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has two roots maximum and we are done</p>\n\n<p>because $0$ and $1$ are roots. 
</p>\n" }, { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, apply MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combine these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ for $x_1=0.7655...$ only and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has two roots maximum and we are done</p>\n\n<p>because $0$ and $1$ are roots. 
</p>\n" }, { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, apply MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combine these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ for $x_1=0.7655...$ only and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has two roots maximum and we are done</p>\n\n<p>because $0$ and $1$ are roots. 
</p>\n" }, { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, apply MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combine these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ for $x_1=0.7655...$ only and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has two roots maximum and we are done</p>\n\n<p>because $0$ and $1$ are roots. 
</p>\n" }, { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, apply MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combine these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ for $x_1=0.7655...$ only and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has two roots maximum and we are done</p>\n\n<p>because $0$ and $1$ are roots. 
</p>\n" }, { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, apply MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combine these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ for $x_1=0.7655...$ only and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has two roots maximum and we are done</p>\n\n<p>because $0$ and $1$ are roots. 
</p>\n" }, { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, apply MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combine these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ for $x_1=0.7655...$ only and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has two roots maximum and we are done</p>\n\n<p>because $0$ and $1$ are roots. 
</p>\n" }, { "answer_id": 2079587, "score": 8, "text": "<p>For any $x \\in \\mathbb{R}$, apply MVT to $t^{x^2}$ on $[5,6]$ and $t^x$ on $[4,5]$,\nwe find a pair of numbers $\\xi \\in (5,6)$ and $\\eta \\in (4,5)$ (both dependent on $x$) such that</p>\n\n<p>$$6^{x^2} - 5^{x^2} = x^2 \\xi^{x^2-1}\\quad\\text{ and }\\quad\n5^x - 4^x = x\\eta^{x-1}$$\nThis implies\n$$\n{\\tt LHS}-{\\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\\xi^{x^2-1} - x\\eta^{x-1}\n$$</p>\n\n<p>There are 3 cases we need to study:</p>\n\n<ul>\n<li><p>$x \\in (1,\\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, we have\n$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\ge x^2\\inf\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\le x\\sup\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\ge x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&gt; 1}\\underbrace{5^{x^2-x}}_{&gt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>\n<li><p>$x \\in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, we have</p></li>\n</ul>\n\n<p>$$\\begin{align}\nx^2\\xi^{x^2-1} &amp; \\le x^2\\sup\\{ t^{x^2-1} : t \\in (5,6) \\} = x^2 \\left(\\inf\\{ t : t \\in (5,6)\\}\\right)^{x^2-1} = x^2 5^{x^2-1}\\\\\nx\\eta^{x-1} &amp; \\ge x\\inf\\{ t^{x-1} : t \\in (4,5)\\} = x\\left(\\sup\\{ t : t \\in (4,5)\\}\\right)^{x-1} = x 5^{x-1}\n\\end{align}\\\\\n\\implies\n{\\tt LHS} - {\\tt RHS} \\le x^2 5^{x^2-1} - x5^{x-1} = (\\underbrace{x}_{&lt; 1}\\underbrace{5^{x^2-x}}_{&lt;1} - 1)\\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>\n\n<ul>\n<li>$x \\in (-\\infty,0)$ - We have \n$$x^2 \\xi^{x^2-1} &gt; 0 \\land x \\eta^{x-1} &lt; 0\n\\quad\\implies\\quad\n{\\tt LHS} - {\\tt RHS} = x^2\\xi^{x^2-1} - x\\eta^{x-1} &gt; 0$$</li>\n</ul>\n\n<p>Combine these 3 cases, we have ${\\tt LHS} \\ne {\\tt RHS}$ for $x \\in \\mathbb{R} \\setminus \\{ 0, 1 \\}$. \nSince we know $0, 1$ are roots of the equation at hand,\n$0$ and $1$ are all the real roots of the equation.</p>\n" }, { "answer_id": 2079568, "score": 7, "text": "<p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>\n\n<p>$f'(x)=0$ for $x_1=0.7655...$ only and $f$ is a decreasing function on $(-\\infty,x_1]$, </p>\n\n<p>$f$ is an increasing function on $[x_1,+\\infty)$,</p>\n\n<p>which says that our equation has two roots maximum and we are done</p>\n\n<p>because $0$ and $1$ are roots. </p>\n" } ]
Question: Find all real solutions to the equation $4^x+6^{x^2}=5^x+5^{x^2}$ Answer 1: <p>For any $x \in \mathbb{R}$, applying the MVT to $t^{x^2}$ on $[5,6]$ and to $t^x$ on $[4,5]$,
we find a pair of numbers $\xi \in (5,6)$ and $\eta \in (4,5)$ (both dependent on $x$) such that</p>

<p>$$6^{x^2} - 5^{x^2} = x^2 \xi^{x^2-1}\quad\text{ and }\quad
5^x - 4^x = x\eta^{x-1}$$
This implies
$$
{\tt LHS}-{\tt RHS} = (4^{x} + 6^{x^2}) - (5^x + 5^{x^2}) = x^2\xi^{x^2-1} - x\eta^{x-1}
$$</p>

<p>There are 3 cases we need to study:</p>

<ul>
<li><p>$x \in (1,\infty)$ - Both $x^2 - 1$ and $x - 1$ are positive, so we have
$$\begin{align}
x^2\xi^{x^2-1} &amp; \ge x^2\inf\{ t^{x^2-1} : t \in (5,6) \} = x^2 \left(\inf\{ t : t \in (5,6)\}\right)^{x^2-1} = x^2 5^{x^2-1}\\
x\eta^{x-1} &amp; \le x\sup\{ t^{x-1} : t \in (4,5)\} = x\left(\sup\{ t : t \in (4,5)\}\right)^{x-1} = x 5^{x-1}
\end{align}\\
\implies
{\tt LHS} - {\tt RHS} \ge x^2 5^{x^2-1} - x5^{x-1} = (\underbrace{x}_{&gt; 1}\underbrace{5^{x^2-x}}_{&gt;1} - 1)\underbrace{x5^{x-1}}_{&gt;0} &gt; 0$$</p></li>
<li><p>$x \in (0,1)$ - Both $x^2-1$ and $x-1$ are negative, so we have</p></li>
</ul>

<p>$$\begin{align}
x^2\xi^{x^2-1} &amp; \le x^2\sup\{ t^{x^2-1} : t \in (5,6) \} = x^2 \left(\inf\{ t : t \in (5,6)\}\right)^{x^2-1} = x^2 5^{x^2-1}\\
x\eta^{x-1} &amp; \ge x\inf\{ t^{x-1} : t \in (4,5)\} = x\left(\sup\{ t : t \in (4,5)\}\right)^{x-1} = x 5^{x-1}
\end{align}\\
\implies
{\tt LHS} - {\tt RHS} \le x^2 5^{x^2-1} - x5^{x-1} = (\underbrace{x}_{&lt; 1}\underbrace{5^{x^2-x}}_{&lt;1} - 1)\underbrace{x5^{x-1}}_{&gt;0} &lt; 0$$</p>

<ul>
<li>$x \in (-\infty,0)$ - We have 
$$x^2 \xi^{x^2-1} &gt; 0 \land x \eta^{x-1} &lt; 0
\quad\implies\quad
{\tt LHS} - {\tt RHS} = x^2\xi^{x^2-1} - x\eta^{x-1} &gt; 0$$</li>
</ul>

<p>Combining these 3 cases, we have ${\tt LHS} \ne {\tt RHS}$ for $x \in \mathbb{R} \setminus \{ 0, 1 \}$. 
Since we know $0$ and $1$ are roots of the equation at hand,
$0$ and $1$ are all the real roots of the equation.</p> Answer 2: <p>Let $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$.</p>

<p>$f'(x)=0$ only at $x_1=0.7655...$; $f$ is decreasing on $(-\infty,x_1]$ and increasing on $[x_1,+\infty)$,</p>

<p>which says that our equation has at most two roots, and we are done</p>

<p>because $0$ and $1$ are roots.</p>
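A quick numerical sanity check of the two answers above (a Python sketch, not part of the original thread; the sample points and the finite-difference step are arbitrary choices). Both arguments concern $f(x)=4^x+6^{x^2}-5^x-5^{x^2}$, whose zeros are the solutions: the MVT argument predicts $f&gt;0$ on $(-\infty,0)$, $f&lt;0$ on $(0,1)$ and $f&gt;0$ on $(1,\infty)$, while Answer 2 claims a single critical point near $x_1=0.7655$ (and indeed $1+1=1+1$ and $4+6=5+5$ confirm the roots $0$ and $1$).

    def f(x):
        # f(x) = 4^x + 6^(x^2) - 5^x - 5^(x^2); solutions of LHS = RHS are zeros of f
        return 4.0**x + 6.0**(x * x) - 5.0**x - 5.0**(x * x)

    def fprime(x, h=1e-6):
        # central-difference approximation of f'(x); the step h is an arbitrary choice
        return (f(x + h) - f(x - h)) / (2.0 * h)

    # Sign pattern predicted by the MVT case analysis: +, -, + on the three intervals
    for x in (-2.0, -0.5, 0.25, 0.5, 0.75, 1.5, 2.0):
        print("f(%+.2f) = %+.6f" % (x, f(x)))

    # f'(0) < 0 and f'(1) > 0, so bisect [0, 1] for the sign change of f'
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if fprime(lo) * fprime(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    print("critical point x_1 ~= %.4f" % lo)  # consistent with the claimed 0.7655...

The printed signs follow the $+,-,+$ pattern away from the roots $0$ and $1$, and the bisection lands near the claimed $x_1$, in line with both arguments.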
Understanding an example in Ramsey Theory
2,079,586
6
https://math.stackexchange.com/questions/2079586/understanding-an-example-in-ramsey-theory
<p>I am reading The Man Who Loved Only Numbers, a biography of Paul Erdos, and I came across an example that I don't quite understand. The book asserts that you can rearrange the first $101$ integers in any order you like and you will always be able to find an increasing or decreasing sequence of eleven integers. What if I choose $1, 101, 2, 100, 3, 99, 4, 98, \dots$ Will I ever find an increasing or decreasing sequence of eleven if I choose this arrangement? Possibly I am misunderstanding the assertion.</p>
[ { "answer_id": 2079592, "score": 6, "text": "<p>I will just pick </p>\n\n<p>$$1,2,3,4,5,6,7,8,9,10,11$$</p>\n\n<p>The book says that </p>\n\n<p>\"You don't have to pick the numbers consecutively,\" Graham said\" You can jump. You might pick the first one, then the nineteenth one, then the twenty-second one, then the thirty-eighth one-but they all have to be going up or going down\"</p>\n\n<p>Ramsey theory, says Graham, makes a generalization of this result: to guarantee either a rising or falling sequence of length $n + 1$, you need $n^2 + 1$ numbers; with $ n^2$ numbers, you may not get it.</p>\n" }, { "answer_id": 2370016, "score": 0, "text": "<p>I was confused by this passage in the book too, and here's what I've learned:</p>\n\n<p><strong>The term 'sequence' does not necessarily imply that the numbers are consecutive.</strong></p>\n\n<p>This is the example set that Graham shows in the book:</p>\n\n<p><code>91, 92, 93, 94, 95, 96, 97, 98, 99, 100,\n 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, \n 71, 72, 73, 74 ,75 ,76 ,77, 78, 79, 80,\n 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, \n 51, 52, 53 ,54, 55, 56, 57, 58, 59, 60,\n 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,\n 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,\n 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,\n 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,\n 1, 2, 3, 4, 5, 6, 7, 8, 9, 10\n</code></p>\n\n<p>The reason this is such a good example is that, even by skipping over numbers, you can't find a sequence of increasing/decreasing numbers that is longer than ten, but there are lots of examples that are exactly ten. The most obvious example is any one of the rows:</p>\n\n<p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50</code></p>\n\n<p>...but there are also sequences like this:</p>\n\n<p><code>94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p>\n\n<p>Or:</p>\n\n<p><code>100, 84, 77, 63, 54, 49, 31, 22, 15, 6</code></p>\n\n<p>The point is, we can't find any sequence like this longer than 10. Now if we throw in <code>101</code> at the very beginning of this set, suddenly such a sequence is possible. In fact, you could put <code>101</code> in lots of places in this set and you would be able to form a sequence of 11. If it's at the very beginning of the set, we can move down a column like so:</p>\n\n<p><code>101, 94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p>\n\n<p>and if it's in the middle, or near the end of the set, we can just slap it on the end of any one of the rows to get 11.</p>\n\n<p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50, 101</code></p>\n\n<p>If you wanted to disprove Ramsey theory, the idea would be to find a way to arrange the numbers so that you can't find any ordered sequence of 11 digits. The example set we use here is not notable for making sequences easy to make, it's notable for making them as hard to make as possible. If you tried to arrange the order another way, like this for example:</p>\n\n<p><code>1, 3, 2, 4, 5, 7, 6, 8...</code></p>\n\n<p>... you wouldn't be making it harder to construct a long sequence, you would be making it easier. Remember, sequences can be non-consecutive and with this approach you would end up finding sequences much longer than 11 in length, and proving Ramsey right. </p>\n\n<p>However, there are some places you could stick that 101st digit which would NOT result in an 11-digit streak. 
To guarantee that you could always find a sequence of 11 digits in order, you would need the numbers from 1 to 11^2, or 121 in total.</p>\n" }, { "answer_id": 2079592, "score": 6, "text": "<p>I will just pick </p>\n\n<p>$$1,2,3,4,5,6,7,8,9,10,11$$</p>\n\n<p>The book says that </p>\n\n<p>\"You don't have to pick the numbers consecutively,\" Graham said\" You can jump. You might pick the first one, then the nineteenth one, then the twenty-second one, then the thirty-eighth one-but they all have to be going up or going down\"</p>\n\n<p>Ramsey theory, says Graham, makes a generalization of this result: to guarantee either a rising or falling sequence of length $n + 1$, you need $n^2 + 1$ numbers; with $ n^2$ numbers, you may not get it.</p>\n" }, { "answer_id": 2370016, "score": 0, "text": "<p>I was confused by this passage in the book too, and here's what I've learned:</p>\n\n<p><strong>The term 'sequence' does not necessarily imply that the numbers are consecutive.</strong></p>\n\n<p>This is the example set that Graham shows in the book:</p>\n\n<p><code>91, 92, 93, 94, 95, 96, 97, 98, 99, 100,\n 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, \n 71, 72, 73, 74 ,75 ,76 ,77, 78, 79, 80,\n 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, \n 51, 52, 53 ,54, 55, 56, 57, 58, 59, 60,\n 41, 42, 43, 44, 45, 46, 47, 48, 49, 50,\n 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,\n 21, 22, 23, 24, 25, 26, 27, 28, 29, 30,\n 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,\n 1, 2, 3, 4, 5, 6, 7, 8, 9, 10\n</code></p>\n\n<p>The reason this is such a good example is that, even by skipping over numbers, you can't find a sequence of increasing/decreasing numbers that is longer than ten, but there are lots of examples that are exactly ten. The most obvious example is any one of the rows:</p>\n\n<p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50</code></p>\n\n<p>...but there are also sequences like this:</p>\n\n<p><code>94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p>\n\n<p>Or:</p>\n\n<p><code>100, 84, 77, 63, 54, 49, 31, 22, 15, 6</code></p>\n\n<p>The point is, we can't find any sequence like this longer than 10. Now if we throw in <code>101</code> at the very beginning of this set, suddenly such a sequence is possible. In fact, you could put <code>101</code> in lots of places in this set and you would be able to form a sequence of 11. If it's at the very beginning of the set, we can move down a column like so:</p>\n\n<p><code>101, 94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p>\n\n<p>and if it's in the middle, or near the end of the set, we can just slap it on the end of any one of the rows to get 11.</p>\n\n<p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50, 101</code></p>\n\n<p>If you wanted to disprove Ramsey theory, the idea would be to find a way to arrange the numbers so that you can't find any ordered sequence of 11 digits. The example set we use here is not notable for making sequences easy to make, it's notable for making them as hard to make as possible. If you tried to arrange the order another way, like this for example:</p>\n\n<p><code>1, 3, 2, 4, 5, 7, 6, 8...</code></p>\n\n<p>... you wouldn't be making it harder to construct a long sequence, you would be making it easier. Remember, sequences can be non-consecutive and with this approach you would end up finding sequences much longer than 11 in length, and proving Ramsey right. </p>\n\n<p>However, there are some places you could stick that 101st digit which would NOT result in an 11-digit streak. 
Question: Understanding an example in Ramsey Theory Answer 1: <p>From your arrangement I will just pick the numbers sitting in the odd positions:</p> <p>$$1,2,3,4,5,6,7,8,9,10,11$$</p> <p>an increasing subsequence of length eleven. The book says that</p> <p>"You don't have to pick the numbers consecutively," Graham said. "You can jump. You might pick the first one, then the nineteenth one, then the twenty-second one, then the thirty-eighth one, but they all have to be going up or going down."</p> <p>Ramsey theory, says Graham, makes a generalization of this result: to guarantee either a rising or falling sequence of length $n + 1$, you need $n^2 + 1$ numbers; with $n^2$ numbers, you may not get it.</p> Answer 2: <p>I was confused by this passage in the book too, and here's what I've learned:</p> <p><strong>The term 'sequence' does not necessarily imply that the numbers are consecutive.</strong></p> <p>This is the example set that Graham shows in the book:</p> <p><code>91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10</code></p> <p>This is such a good example because, even by skipping over numbers, you can't find an increasing or decreasing subsequence longer than ten, while there are plenty of length exactly ten. The most obvious examples are the rows themselves:</p> <p><code>41, 42, 43, 44, 45, 46, 47, 48, 49, 50</code></p> <p>...but there are also subsequences like this:</p> <p><code>94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>Or:</p> <p><code>100, 84, 77, 63, 54, 49, 31, 22, 15, 6</code></p> <p>The point is, no such subsequence is longer than 10. Now throw <code>101</code> into this arrangement and a subsequence of 11 appears. If <code>101</code> is at the very beginning, we can move down a column like so:</p> <p><code>101, 94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>and if it's in the middle, or near the end, we can just append it to any row that finishes before it:</p> <p><code>41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 101</code></p> <p>To refute the book's claim, you would have to arrange the 101 numbers so that no increasing or decreasing subsequence of 11 numbers exists. The grid above is notable not for making such subsequences easy to find, but for making them as hard to find as possible. If you tried another arrangement, say</p> <p><code>1, 3, 2, 4, 5, 7, 6, 8...</code></p> <p>you would only make long subsequences easier: remember, subsequences can skip numbers, so here you would find monotone runs much longer than 11, confirming the theorem.</p> <p>In fact there is nowhere you can put that 101st number without creating an 11-term run: $10^2 + 1 = 101$ numbers always force a rising or falling subsequence of length 11, and the grid shows that $10^2 = 100$ numbers need not.</p>
You might pick the first one, then the nineteenth one, then the twenty-second one, then the thirty-eighth one-but they all have to be going up or going down"</p> <p>Ramsey theory, says Graham, makes a generalization of this result: to guarantee either a rising or falling sequence of length $n + 1$, you need $n^2 + 1$ numbers; with $ n^2$ numbers, you may not get it.</p> Answer 16: <p>I was confused by this passage in the book too, and here's what I've learned:</p> <p><strong>The term 'sequence' does not necessarily imply that the numbers are consecutive.</strong></p> <p>This is the example set that Graham shows in the book:</p> <p><code>91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 71, 72, 73, 74 ,75 ,76 ,77, 78, 79, 80, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 51, 52, 53 ,54, 55, 56, 57, 58, 59, 60, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 </code></p> <p>The reason this is such a good example is that, even by skipping over numbers, you can't find a sequence of increasing/decreasing numbers that is longer than ten, but there are lots of examples that are exactly ten. The most obvious example is any one of the rows:</p> <p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50</code></p> <p>...but there are also sequences like this:</p> <p><code>94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>Or:</p> <p><code>100, 84, 77, 63, 54, 49, 31, 22, 15, 6</code></p> <p>The point is, we can't find any sequence like this longer than 10. Now if we throw in <code>101</code> at the very beginning of this set, suddenly such a sequence is possible. In fact, you could put <code>101</code> in lots of places in this set and you would be able to form a sequence of 11. If it's at the very beginning of the set, we can move down a column like so:</p> <p><code>101, 94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>and if it's in the middle, or near the end of the set, we can just slap it on the end of any one of the rows to get 11.</p> <p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50, 101</code></p> <p>If you wanted to disprove Ramsey theory, the idea would be to find a way to arrange the numbers so that you can't find any ordered sequence of 11 digits. The example set we use here is not notable for making sequences easy to make, it's notable for making them as hard to make as possible. If you tried to arrange the order another way, like this for example:</p> <p><code>1, 3, 2, 4, 5, 7, 6, 8...</code></p> <p>... you wouldn't be making it harder to construct a long sequence, you would be making it easier. Remember, sequences can be non-consecutive and with this approach you would end up finding sequences much longer than 11 in length, and proving Ramsey right. </p> <p>However, there are some places you could stick that 101st digit which would NOT result in an 11-digit streak. To guarantee that you could always find a sequence of 11 digits in order, you would need the numbers from 1 to 11^2, or 121 in total.</p> Answer 17: <p>I will just pick </p> <p>$$1,2,3,4,5,6,7,8,9,10,11$$</p> <p>The book says that </p> <p>"You don't have to pick the numbers consecutively," Graham said" You can jump. 
You might pick the first one, then the nineteenth one, then the twenty-second one, then the thirty-eighth one-but they all have to be going up or going down"</p> <p>Ramsey theory, says Graham, makes a generalization of this result: to guarantee either a rising or falling sequence of length $n + 1$, you need $n^2 + 1$ numbers; with $ n^2$ numbers, you may not get it.</p> Answer 18: <p>I was confused by this passage in the book too, and here's what I've learned:</p> <p><strong>The term 'sequence' does not necessarily imply that the numbers are consecutive.</strong></p> <p>This is the example set that Graham shows in the book:</p> <p><code>91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 71, 72, 73, 74 ,75 ,76 ,77, 78, 79, 80, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 51, 52, 53 ,54, 55, 56, 57, 58, 59, 60, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 </code></p> <p>The reason this is such a good example is that, even by skipping over numbers, you can't find a sequence of increasing/decreasing numbers that is longer than ten, but there are lots of examples that are exactly ten. The most obvious example is any one of the rows:</p> <p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50</code></p> <p>...but there are also sequences like this:</p> <p><code>94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>Or:</p> <p><code>100, 84, 77, 63, 54, 49, 31, 22, 15, 6</code></p> <p>The point is, we can't find any sequence like this longer than 10. Now if we throw in <code>101</code> at the very beginning of this set, suddenly such a sequence is possible. In fact, you could put <code>101</code> in lots of places in this set and you would be able to form a sequence of 11. If it's at the very beginning of the set, we can move down a column like so:</p> <p><code>101, 94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>and if it's in the middle, or near the end of the set, we can just slap it on the end of any one of the rows to get 11.</p> <p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50, 101</code></p> <p>If you wanted to disprove Ramsey theory, the idea would be to find a way to arrange the numbers so that you can't find any ordered sequence of 11 digits. The example set we use here is not notable for making sequences easy to make, it's notable for making them as hard to make as possible. If you tried to arrange the order another way, like this for example:</p> <p><code>1, 3, 2, 4, 5, 7, 6, 8...</code></p> <p>... you wouldn't be making it harder to construct a long sequence, you would be making it easier. Remember, sequences can be non-consecutive and with this approach you would end up finding sequences much longer than 11 in length, and proving Ramsey right. </p> <p>However, there are some places you could stick that 101st digit which would NOT result in an 11-digit streak. To guarantee that you could always find a sequence of 11 digits in order, you would need the numbers from 1 to 11^2, or 121 in total.</p> Answer 19: <p>I will just pick </p> <p>$$1,2,3,4,5,6,7,8,9,10,11$$</p> <p>The book says that </p> <p>"You don't have to pick the numbers consecutively," Graham said" You can jump. 
You might pick the first one, then the nineteenth one, then the twenty-second one, then the thirty-eighth one-but they all have to be going up or going down"</p> <p>Ramsey theory, says Graham, makes a generalization of this result: to guarantee either a rising or falling sequence of length $n + 1$, you need $n^2 + 1$ numbers; with $ n^2$ numbers, you may not get it.</p> Answer 20: <p>I was confused by this passage in the book too, and here's what I've learned:</p> <p><strong>The term 'sequence' does not necessarily imply that the numbers are consecutive.</strong></p> <p>This is the example set that Graham shows in the book:</p> <p><code>91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 71, 72, 73, 74 ,75 ,76 ,77, 78, 79, 80, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 51, 52, 53 ,54, 55, 56, 57, 58, 59, 60, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 </code></p> <p>The reason this is such a good example is that, even by skipping over numbers, you can't find a sequence of increasing/decreasing numbers that is longer than ten, but there are lots of examples that are exactly ten. The most obvious example is any one of the rows:</p> <p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50</code></p> <p>...but there are also sequences like this:</p> <p><code>94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>Or:</p> <p><code>100, 84, 77, 63, 54, 49, 31, 22, 15, 6</code></p> <p>The point is, we can't find any sequence like this longer than 10. Now if we throw in <code>101</code> at the very beginning of this set, suddenly such a sequence is possible. In fact, you could put <code>101</code> in lots of places in this set and you would be able to form a sequence of 11. If it's at the very beginning of the set, we can move down a column like so:</p> <p><code>101, 94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>and if it's in the middle, or near the end of the set, we can just slap it on the end of any one of the rows to get 11.</p> <p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50, 101</code></p> <p>If you wanted to disprove Ramsey theory, the idea would be to find a way to arrange the numbers so that you can't find any ordered sequence of 11 digits. The example set we use here is not notable for making sequences easy to make, it's notable for making them as hard to make as possible. If you tried to arrange the order another way, like this for example:</p> <p><code>1, 3, 2, 4, 5, 7, 6, 8...</code></p> <p>... you wouldn't be making it harder to construct a long sequence, you would be making it easier. Remember, sequences can be non-consecutive and with this approach you would end up finding sequences much longer than 11 in length, and proving Ramsey right. </p> <p>However, there are some places you could stick that 101st digit which would NOT result in an 11-digit streak. To guarantee that you could always find a sequence of 11 digits in order, you would need the numbers from 1 to 11^2, or 121 in total.</p> Answer 21: <p>I will just pick </p> <p>$$1,2,3,4,5,6,7,8,9,10,11$$</p> <p>The book says that </p> <p>"You don't have to pick the numbers consecutively," Graham said" You can jump. 
You might pick the first one, then the nineteenth one, then the twenty-second one, then the thirty-eighth one-but they all have to be going up or going down"</p> <p>Ramsey theory, says Graham, makes a generalization of this result: to guarantee either a rising or falling sequence of length $n + 1$, you need $n^2 + 1$ numbers; with $ n^2$ numbers, you may not get it.</p> Answer 22: <p>I was confused by this passage in the book too, and here's what I've learned:</p> <p><strong>The term 'sequence' does not necessarily imply that the numbers are consecutive.</strong></p> <p>This is the example set that Graham shows in the book:</p> <p><code>91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 71, 72, 73, 74 ,75 ,76 ,77, 78, 79, 80, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 51, 52, 53 ,54, 55, 56, 57, 58, 59, 60, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 </code></p> <p>The reason this is such a good example is that, even by skipping over numbers, you can't find a sequence of increasing/decreasing numbers that is longer than ten, but there are lots of examples that are exactly ten. The most obvious example is any one of the rows:</p> <p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50</code></p> <p>...but there are also sequences like this:</p> <p><code>94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>Or:</p> <p><code>100, 84, 77, 63, 54, 49, 31, 22, 15, 6</code></p> <p>The point is, we can't find any sequence like this longer than 10. Now if we throw in <code>101</code> at the very beginning of this set, suddenly such a sequence is possible. In fact, you could put <code>101</code> in lots of places in this set and you would be able to form a sequence of 11. If it's at the very beginning of the set, we can move down a column like so:</p> <p><code>101, 94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>and if it's in the middle, or near the end of the set, we can just slap it on the end of any one of the rows to get 11.</p> <p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50, 101</code></p> <p>If you wanted to disprove Ramsey theory, the idea would be to find a way to arrange the numbers so that you can't find any ordered sequence of 11 digits. The example set we use here is not notable for making sequences easy to make, it's notable for making them as hard to make as possible. If you tried to arrange the order another way, like this for example:</p> <p><code>1, 3, 2, 4, 5, 7, 6, 8...</code></p> <p>... you wouldn't be making it harder to construct a long sequence, you would be making it easier. Remember, sequences can be non-consecutive and with this approach you would end up finding sequences much longer than 11 in length, and proving Ramsey right. </p> <p>However, there are some places you could stick that 101st digit which would NOT result in an 11-digit streak. To guarantee that you could always find a sequence of 11 digits in order, you would need the numbers from 1 to 11^2, or 121 in total.</p> Answer 23: <p>I will just pick </p> <p>$$1,2,3,4,5,6,7,8,9,10,11$$</p> <p>The book says that </p> <p>"You don't have to pick the numbers consecutively," Graham said" You can jump. 
You might pick the first one, then the nineteenth one, then the twenty-second one, then the thirty-eighth one-but they all have to be going up or going down"</p> <p>Ramsey theory, says Graham, makes a generalization of this result: to guarantee either a rising or falling sequence of length $n + 1$, you need $n^2 + 1$ numbers; with $ n^2$ numbers, you may not get it.</p> Answer 24: <p>I was confused by this passage in the book too, and here's what I've learned:</p> <p><strong>The term 'sequence' does not necessarily imply that the numbers are consecutive.</strong></p> <p>This is the example set that Graham shows in the book:</p> <p><code>91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 71, 72, 73, 74 ,75 ,76 ,77, 78, 79, 80, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 51, 52, 53 ,54, 55, 56, 57, 58, 59, 60, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 </code></p> <p>The reason this is such a good example is that, even by skipping over numbers, you can't find a sequence of increasing/decreasing numbers that is longer than ten, but there are lots of examples that are exactly ten. The most obvious example is any one of the rows:</p> <p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50</code></p> <p>...but there are also sequences like this:</p> <p><code>94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>Or:</p> <p><code>100, 84, 77, 63, 54, 49, 31, 22, 15, 6</code></p> <p>The point is, we can't find any sequence like this longer than 10. Now if we throw in <code>101</code> at the very beginning of this set, suddenly such a sequence is possible. In fact, you could put <code>101</code> in lots of places in this set and you would be able to form a sequence of 11. If it's at the very beginning of the set, we can move down a column like so:</p> <p><code>101, 94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>and if it's in the middle, or near the end of the set, we can just slap it on the end of any one of the rows to get 11.</p> <p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50, 101</code></p> <p>If you wanted to disprove Ramsey theory, the idea would be to find a way to arrange the numbers so that you can't find any ordered sequence of 11 digits. The example set we use here is not notable for making sequences easy to make, it's notable for making them as hard to make as possible. If you tried to arrange the order another way, like this for example:</p> <p><code>1, 3, 2, 4, 5, 7, 6, 8...</code></p> <p>... you wouldn't be making it harder to construct a long sequence, you would be making it easier. Remember, sequences can be non-consecutive and with this approach you would end up finding sequences much longer than 11 in length, and proving Ramsey right. </p> <p>However, there are some places you could stick that 101st digit which would NOT result in an 11-digit streak. To guarantee that you could always find a sequence of 11 digits in order, you would need the numbers from 1 to 11^2, or 121 in total.</p> Answer 25: <p>I will just pick </p> <p>$$1,2,3,4,5,6,7,8,9,10,11$$</p> <p>The book says that </p> <p>"You don't have to pick the numbers consecutively," Graham said" You can jump. 
You might pick the first one, then the nineteenth one, then the twenty-second one, then the thirty-eighth one-but they all have to be going up or going down"</p> <p>Ramsey theory, says Graham, makes a generalization of this result: to guarantee either a rising or falling sequence of length $n + 1$, you need $n^2 + 1$ numbers; with $ n^2$ numbers, you may not get it.</p> Answer 26: <p>I was confused by this passage in the book too, and here's what I've learned:</p> <p><strong>The term 'sequence' does not necessarily imply that the numbers are consecutive.</strong></p> <p>This is the example set that Graham shows in the book:</p> <p><code>91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 71, 72, 73, 74 ,75 ,76 ,77, 78, 79, 80, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 51, 52, 53 ,54, 55, 56, 57, 58, 59, 60, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 </code></p> <p>The reason this is such a good example is that, even by skipping over numbers, you can't find a sequence of increasing/decreasing numbers that is longer than ten, but there are lots of examples that are exactly ten. The most obvious example is any one of the rows:</p> <p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50</code></p> <p>...but there are also sequences like this:</p> <p><code>94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>Or:</p> <p><code>100, 84, 77, 63, 54, 49, 31, 22, 15, 6</code></p> <p>The point is, we can't find any sequence like this longer than 10. Now if we throw in <code>101</code> at the very beginning of this set, suddenly such a sequence is possible. In fact, you could put <code>101</code> in lots of places in this set and you would be able to form a sequence of 11. If it's at the very beginning of the set, we can move down a column like so:</p> <p><code>101, 94, 84, 74, 64, 54, 44, 34, 24, 14, 4</code></p> <p>and if it's in the middle, or near the end of the set, we can just slap it on the end of any one of the rows to get 11.</p> <p><code>41. 42. 43. 44. 45. 46. 47. 48. 49. 50, 101</code></p> <p>If you wanted to disprove Ramsey theory, the idea would be to find a way to arrange the numbers so that you can't find any ordered sequence of 11 digits. The example set we use here is not notable for making sequences easy to make, it's notable for making them as hard to make as possible. If you tried to arrange the order another way, like this for example:</p> <p><code>1, 3, 2, 4, 5, 7, 6, 8...</code></p> <p>... you wouldn't be making it harder to construct a long sequence, you would be making it easier. Remember, sequences can be non-consecutive and with this approach you would end up finding sequences much longer than 11 in length, and proving Ramsey right. </p> <p>However, there are some places you could stick that 101st digit which would NOT result in an 11-digit streak. To guarantee that you could always find a sequence of 11 digits in order, you would need the numbers from 1 to 11^2, or 121 in total.</p>
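This bound is the Erdős–Szekeres theorem. As a quick sanity check, the short script below (an editorial sketch, not part of any answer above; the helper name <code>longest_monotone</code> is illustrative) verifies that Graham's $10 \times 10$ arrangement really has no rising or falling subsequence longer than ten:

<pre><code>
# Sketch: check Graham's 100-number arrangement against the
# Erdos-Szekeres bound (n^2 numbers need not contain a monotone
# subsequence of length n + 1; here n = 10).

def longest_monotone(seq, increasing=True):
    # O(n^2) dynamic program: best[i] = length of the longest
    # monotone subsequence ending at position i.
    cmp = (lambda a, b: a < b) if increasing else (lambda a, b: a > b)
    best = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if cmp(seq[j], seq[i]):
                best[i] = max(best[i], best[j] + 1)
    return max(best)

# The rows 91..100, 81..90, ..., 1..10, exactly as quoted above.
arrangement = [n for start in range(91, 0, -10) for n in range(start, start + 10)]

assert longest_monotone(arrangement, increasing=True) == 10
assert longest_monotone(arrangement, increasing=False) == 10
print("longest rising and falling subsequences both have length 10")
</code></pre>

By Erdős–Szekeres, any sequence of $10^2 + 1 = 101$ distinct numbers, however arranged, must contain a monotone subsequence of length $11$, which is exactly the "one more number breaks the example" phenomenon described above.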
Conductor $0$ implies unramified
2,079,951
6
https://math.stackexchange.com/questions/2079951/conductor-0-implies-unramified
<blockquote> <p>Suppose that $L/K$ is a finite abelian extension of the local field $K$. Suppose that $U_K$ is contained in $N(L^*)$. Then $L/K$ is unramified. </p> </blockquote> <p>What I've tried: </p> <p>I know that $L/K$ being unramified is equivalent to the inertia subgroup being trivial. So take an $s$ in the Galois group $G$ such that $s$ acts trivially on $\mathcal{O}_L / p_L$, i.e. $s(x) - x \in p_L$ for every $x \in \mathcal{O}_L$. However, I don't see how to relate this to the hypothesis $U_K = \mathcal{O}_K \setminus p_K \subseteq N(L^*)$. </p> <p>I also tried taking uniformizers $\pi_L$ and $\pi_K$. Then I know that $|\pi_K|_K = |\pi_L|_L^{\,e}$, and I try to show that $e = 1$. Once again, though, I struggle to bring the units of $\mathcal{O}_K$ into play. </p> <p>I know that the units of $\mathcal{O}_K$ are the elements of absolute value $1$, or equivalently the elements outside the maximal ideal. I don't understand how to relate this to the norm group of $L^*$. </p>
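A standard fact ties the question's two attempted approaches together (recorded here for reference, in the notation of the post; $v_L$ and $v_K$ denote the normalized valuations): for every $x \in L^*$, $$v_K\bigl(N_{L/K}(x)\bigr) = f \cdot v_L(x),$$ where $f$ is the residual degree. Since $ef = [L:K]$, proving $L/K$ unramified ($e = 1$) amounts to proving $f = [L:K]$, and the displayed identity is exactly how the norm group detects $f$. Both answers below exploit this.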
[ { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. (By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" }, { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. (By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" }, { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. 
By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. (By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" }, { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. (By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" }, { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. 
(By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" }, { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. (By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" }, { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. (By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" }, { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. 
Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. (By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" }, { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. (By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" }, { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. 
$L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. (By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" }, { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. (By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" }, { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. (By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" }, { "answer_id": 2081018, "score": 3, "text": "<p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. 
First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \\to U_L \\to L^* \\to Z \\to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \\cong K^*/NL^* \\cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>\n" }, { "answer_id": 2079980, "score": 2, "text": "<p>$N(L^{\\times})$ will also contain some power of $\\pi_K$ since it has finite index in $K^{\\times}$. (By local reciprocity, the index is $[L:K]$)</p>\n\n<p>If $U_K \\times \\pi_K^n$ is contained in $N(L^{\\times})$, then $N(L^{\\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p>\n" } ]
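For readers who want the commutative diagram that the first answer alludes to, here is one way to draw it (a sketch in that answer's notation, writing $\mathbb{Z}$ for its $Z$): $$\begin{array}{ccccccccc} 0 &\to& U_L &\to& L^* &\xrightarrow{\;v_L\;}& \mathbb{Z} &\to& 0\\ && \downarrow N && \downarrow N && \downarrow \times f && \\ 0 &\to& U_K &\to& K^* &\xrightarrow{\;v_K\;}& \mathbb{Z} &\to& 0 \end{array}$$ The rows are exact, the squares commute because $v_K(N_{L/K}(x)) = f\,v_L(x)$, the left vertical map is surjective by hypothesis, and the right one has cokernel $\mathbb{Z}/f\mathbb{Z}$; the snake lemma then yields $K^*/NL^* \cong \mathbb{Z}/f\mathbb{Z}$.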
Question: Conductor $0$ implies unramified Answer 1: <p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \to U_L \to L^* \to Z \to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \cong K^*/NL^* \cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p> Answer 2: <p>$N(L^{\times})$ will also contain some power of $\pi_K$ since it has finite index in $K^{\times}$. (By local reciprocity, the index is $[L:K]$)</p> <p>If $U_K \times \pi_K^n$ is contained in $N(L^{\times})$, then $N(L^{\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p> Answer 3: <p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \to U_L \to L^* \to Z \to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \cong K^*/NL^* \cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p> Answer 4: <p>$N(L^{\times})$ will also contain some power of $\pi_K$ since it has finite index in $K^{\times}$. (By local reciprocity, the index is $[L:K]$)</p> <p>If $U_K \times \pi_K^n$ is contained in $N(L^{\times})$, then $N(L^{\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p> Answer 5: <p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \to U_L \to L^* \to Z \to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \cong K^*/NL^* \cong Z/fZ$. In particular $[L:K]=f$, i.e. 
$L/K$ is unramified.</p> Answer 6: <p>$N(L^{\times})$ will also contain some power of $\pi_K$ since it has finite index in $K^{\times}$. (By local reciprocity, the index is $[L:K]$)</p> <p>If $U_K \times \pi_K^n$ is contained in $N(L^{\times})$, then $N(L^{\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p> Answer 7: <p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \to U_L \to L^* \to Z \to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \cong K^*/NL^* \cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p> Answer 8: <p>$N(L^{\times})$ will also contain some power of $\pi_K$ since it has finite index in $K^{\times}$. (By local reciprocity, the index is $[L:K]$)</p> <p>If $U_K \times \pi_K^n$ is contained in $N(L^{\times})$, then $N(L^{\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p> Answer 9: <p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \to U_L \to L^* \to Z \to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\cong Z/fZ$ . <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \cong K^*/NL^* \cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p> Answer 10: <p>$N(L^{\times})$ will also contain some power of $\pi_K$ since it has finite index in $K^{\times}$. (By local reciprocity, the index is $[L:K]$)</p> <p>If $U_K \times \pi_K^n$ is contained in $N(L^{\times})$, then $N(L^{\times})$ contains the norm group of the unramified extension of $K$ of degree $n$, so $L$ is contained in the unramified extension of degree $n$, so it is also unramified.</p> Answer 11: <p>As the problem is presented (a finite abelian extension $L/K$ + an additional condition on norms), the use of CFT seems inevitable. But let us try to keep it minimal. First suppose that $L/K$ is only galois, of degree $n$ and residual degree $f$. Consider the exact sequence of valuation at level $L$, say $0 \to U_L \to L^* \to Z \to 0$ and norm it down to the analogous exact sequence at level $K$. (Ideally, one should write down a commutative diagram, but I don't know how to). 
By definition of the inertia index, the norm just induces multiplication by $f$ on $Z$. By hypothesis, the norm from $U_L$ to $U_K$ is surjective, hence the snake lemma applied to our commutative diagram immediately shows that $K^*/NL^*\cong Z/fZ$. <em>Then</em> suppose $L/K$ abelian, so that by local CFT, $Gal(L/K) \cong K^*/NL^* \cong Z/fZ$. In particular $[L:K]=f$, i.e. $L/K$ is unramified.</p>
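<p>For reference, the commutative diagram alluded to above can be typeset as follows; this is a minimal sketch in array notation, where the only names added here are the valuation maps $v_L$, $v_K$ (the vertical arrows are the norm $N$ and multiplication by $f$):</p>

$$\begin{array}{ccccccccc} 0 &\to& U_L &\to& L^* &\xrightarrow{\;v_L\;}& Z &\to& 0 \\ && \downarrow N && \downarrow N && \downarrow \cdot f && \\ 0 &\to& U_K &\to& K^* &\xrightarrow{\;v_K\;}& Z &\to& 0 \end{array}$$

<p>Since the rows are exact, the snake lemma yields the exact sequence $\mathrm{coker}(N\colon U_L\to U_K)\to K^*/NL^*\to Z/fZ\to 0$, and surjectivity of the norm on units kills the first term, giving the isomorphism used above.</p>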
Help find closed form for: $\sum_{n=0}^{\infty}{(-1)^n\left({{\pi\over 2}}\right)^{2n}\over (2n+k)!}=F(k)$
2,080,167
6
https://math.stackexchange.com/questions/2080167/help-find-closed-form-for-sum-n-0-infty-1n-left-pi-over-2-right
<p>What is the closed form for</p> <p>$$\sum_{n=0}^{\infty}{(-1)^n\left({{\pi\over 2}}\right)^{2n}\over (2n+k)!}=F(k)?$$</p> <p>My try:</p> <p>I have found a few values of $F(k)$, but was unable to find a closed form for it.</p> <p>$F(0)=0$</p> <p>$F(1)={2\over \pi}$</p> <p>$F(2)=\left({2\over \pi}\right)^2$</p> <p>$F(3)=\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^3$</p> <p>$F(4)={1\over 2}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4$</p> <p>$F(5)={1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$</p>
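<p>These values are easy to sanity-check numerically before looking for a pattern; the snippet below is only an illustrative sketch (the helper name <code>F_partial</code> is invented here). It truncates the series and compares it against the values listed above:</p>

<pre><code>from math import factorial, pi, isclose

def F_partial(k, terms=60):
    """Truncation of the series sum over n of (-1)^n (pi/2)^(2n) / (2n+k)!."""
    return sum((-1)**n * (pi/2)**(2*n) / factorial(2*n + k)
               for n in range(terms))

t = 2 / pi  # every listed value is a polynomial in 2/pi
listed = {0: 0.0, 1: t, 2: t**2, 3: t**2 - t**3,
          4: t**2/2 - t**4, 5: t**2/6 - t**4 + t**5}

for k, val in listed.items():
    assert isclose(F_partial(k), val, abs_tol=1e-12), k
print("all six listed values confirmed")
</code></pre>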
[ { "answer_id": 2080179, "score": 8, "text": "<p>Note that</p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{(2n)! } = \\cos(x)$$</p>\n\n<p>You can reach the result by integrating $k$ times. </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{(2n+k)! } = \\frac{1}{x^{k}}\\int^{x}_0 \\mathrm{d}t_{k-1}\\int^{t_{k-1}}_0 \\mathrm{d}t_{k-2} \\cdots\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) $$</p>\n\n<p>For example, when $k=1$</p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n(\\pi/2)^{2n}}{(2n+1)! } = \\frac{2}{\\pi}\\int^{\\pi/2}_0 \\mathrm{d}t_0 \\cos(t_0)\\,= \\frac{2}{\\pi}$$</p>\n\n<p>For $k=2$</p>\n\n<p>\\begin{align}\n\\sum_{n=0}^\\infty \\frac{(-1)^n(\\pi/2)^{2n}}{(2n+2)! } &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) \\\\ &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\sin(t_1)\\\\\n&amp; = \\left( \\frac{2}{\\pi}\\right)^2\n\\end{align}</p>\n\n<p>For </p>\n\n<p>$$f_k(x) = \\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p>\n\n<p>we can define recursively </p>\n\n<p>$$f_0(x) = \\cos(x)$$</p>\n\n<p>$$f_k(x) =\\int^x_0 f_{k-1}(t)\\,dt$$</p>\n\n<p>Solving the recursion, we get </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!} = \\begin{cases} \\sum_{n=0}^{\\lceil k/2 \\rceil-2}\\frac{x^{2n+1}(-1)^{\\lceil k/2 \\rceil+n}}{(2n+1)!}-(-1)^{\\lceil k/2 \\rceil}\\sin(x) &amp; \\text{If $k$ is odd} \\\\ \\sum_{n=0}^{k/2-1}\\frac{x^{2n}(-1)^{n+k/2+1}}{(2n)!}+(-1)^{k/2}\\cos(x) &amp; \\text{If $k$ is even}\\end{cases}$$</p>\n\n<p>Finally we have the closed form </p>\n\n<blockquote>\n <p>$$F(k) = \\begin{cases}\\left( \\frac{2}{\\pi}\\right)^k \\sum_{n=0}^{\\lceil\n k/2 \\rceil-2}\\frac{(\\pi/2)^{2n+1}(-1)^{\\lceil k/2\n \\rceil+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^k(-1)^{\\lceil k/2\n \\rceil} &amp; \\text{If $k$ is odd} \\\\ \\left(\n \\frac{2}{\\pi}\\right)^k\\sum_{n=0}^{k/2-1}\\frac{(\\pi/2)^{2n}(-1)^{n+k/2+1}}{(2n)!}&amp;\n \\text{If $k$ is even}\\end{cases}$$</p>\n</blockquote>\n\n<p>To check the correctness of the formula </p>\n\n<p>For $k=5$ we have </p>\n\n<p>$$F(5)=\\left(\\frac{2}{\\pi}\\right)^5 \\sum_{n=0}^{1}\\frac{(\\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^5(-1)^{3} = {1\\over 6}\\left({2\\over \\pi}\\right)^2-\\left({2\\over \\pi}\\right)^4+\\left({2\\over \\pi}\\right)^5$$</p>\n" }, { "answer_id": 2080175, "score": 4, "text": "<p>Your sum is the special value at $x=-(\\pi/2)^2$ of \n$$\\sum_{n=0}^\\infty \\frac{x^n}{(2n + k)!}=\\frac{\\, _1F_2\\left(1;\\frac{k}{2}+\\frac{1}{2},\\frac{k}{2}+1;\\frac{x}{4}\\right)}{k!}$$</p>\n" }, { "answer_id": 2081184, "score": 2, "text": "<p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br>\nIn fact, premised that for a general function of an integer $f(k)$ we have\n$$\n\\begin{gathered}\n \\frac{{\\left( {i^{\\,k} + \\left( { - i} \\right)^{\\,k} } \\right)}}\n{2}f(k) = \\left( {\\frac{{e^{\\,i\\,k\\frac{\\pi }\n{2}} + e^{\\, - \\,i\\,k\\frac{\\pi }\n{2}} }}\n{2}} \\right)f(k) = \\cos \\left( {k\\frac{\\pi }\n{2}} \\right)f(k) = \\hfill \\\\\n = \\left[ {k = 2j} \\right]\\left( { - 1} \\right)^{\\,j} f(2j)\\quad \\left| {\\;k,j\\; \\in \\;\\;\\mathbb{Z}\\,} \\right.
\\hfill \\\\ \n\\end{gathered} \n$$\nand that the <em>Lower Incomplete Gamma</em> function can be expressed as:\n$$\n\\begin{gathered}\n \\gamma (s,z) = \\int_{\\,0\\,}^{\\,z\\,} {t^{\\,s - 1} \\;e^{\\, - \\,t} dt} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\;\\Gamma (s)\\;\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\Gamma (s + k + 1)}}} = \\frac{{z^{\\,s} \\;e^{\\,\\, - z} }}\n{s}\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\left( {s + 1} \\right)^{\\,\\overline {\\,k\\,} } \\,}}} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{s^{\\,\\overline {\\,k + 1\\,} } \\,}}} = z^{\\,s} \\sum\\limits_{0\\, \\leqslant \\,j} {\\frac{{\\left( { - 1} \\right)^{\\,j} }}\n{{\\,\\left( {s + j} \\right)}}\\frac{{z^{\\,j} }}\n{{j!}}} \\hfill \\\\ \n\\end{gathered} \n$$\nwe can then write\n$$\n\\begin{gathered}\n F(x,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{x^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\frac{1}\n{2}\\left( {\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( {i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} + \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( { - \\,i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} } \\right) = \\hfill \\\\\n = \\frac{1}\n{{2\\,\\Gamma (m)}}\\left( {\\frac{{e^{\\,\\,i\\,x} \\,\\gamma (m,i\\,x)}}\n{{\\left( {i\\,x} \\right)^{\\,m} }} + \\frac{{e^{\\, - \\,i\\,x} \\,\\gamma (m, - i\\,x)}}\n{{\\left( { - i\\,x} \\right)^{\\,m} }}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,x)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,x)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$</p>\n\n<p>So, for $x=\\pi /2$ we get\n$$\n\\begin{gathered}\n F(\\pi /2,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,\\pi /2)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,\\pi /2)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$\n<strong>example with $m=5$</strong>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,4\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(5,i\\,\\pi /2)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\,\\left( {1 - \\frac{1}\n{{24}}\\left( {12\\,\\pi - \\frac{1}\n{2}\\pi ^{\\,3} + i\\left( {3\\pi ^{\\,2} - \\frac{1}\n{{16}}\\pi ^{\\,4} - 24} \\right)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\left( {1 - 
\\,\\frac{\\pi }\n{2} + \\frac{1}\n{{48}}\\pi ^{\\,3} } \\right) = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} \\hfill \\\\ \n\\end{gathered} \n$$\nwhich matches the value given by <em>Zaid Alyafeai</em>,\n as well as the formula indicated by <em>Igor Rivin</em>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} = \\hfill \\\\\n = 0.007860176 \\cdots = \\hfill \\\\\n = \\frac{1}\n{{5!}}{}_1F_2 \\left( {1\\;;\\;\\frac{5}\n{2} + \\frac{1}\n{2},\\;\\frac{5}\n{2} + 1\\;;\\; - \\frac{1}\n{4}\\left( {\\frac{\\pi }\n{2}} \\right)^{\\,2} } \\right) \\hfill \\\\ \n\\end{gathered} \n$$\nand concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p>\n" } ]
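<p>The piecewise closed form in the first answer is straightforward to test against the defining series. Below is a minimal sketch (the names <code>F_series</code> and <code>F_closed</code> are invented for the check), evaluating the formula at $x=\pi/2$, where the $\sin$ term contributes $1$ and the $\cos$ term vanishes:</p>

<pre><code>from math import factorial, pi, ceil, isclose

def F_series(k, terms=60):
    # Direct truncation of the defining series for F(k).
    return sum((-1)**n * (pi/2)**(2*n) / factorial(2*n + k)
               for n in range(terms))

def F_closed(k):
    # Piecewise closed form for F(k) at x = pi/2 (sin(pi/2) = 1, cos(pi/2) = 0).
    t = 2 / pi
    if k % 2:  # odd k
        c = ceil(k / 2)
        s = sum((pi/2)**(2*n+1) * (-1)**(c+n) / factorial(2*n+1)
                for n in range(c - 1))
        return t**k * s - t**k * (-1)**c
    h = k // 2  # even k; the cosine term drops out at x = pi/2
    return t**k * sum((pi/2)**(2*n) * (-1)**(n+h+1) / factorial(2*n)
                      for n in range(h))

for k in range(12):
    assert isclose(F_series(k), F_closed(k), abs_tol=1e-12), k
print("closed form matches the series for k = 0..11")
</code></pre>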
} &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) \\\\ &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\sin(t_1)\\\\\n&amp; = \\left( \\frac{2}{\\pi}\\right)^2\n\\end{align}</p>\n\n<p>For </p>\n\n<p>$$f_k(x) = \\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p>\n\n<p>We can define recursively </p>\n\n<p>$$f_0(x) = \\cos(x)$$</p>\n\n<p>$$f_k(x) =\\int^x_0 f_{k-1}(t)\\,dt$$</p>\n\n<p>By solving the recursive formula </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!} = \\begin{cases} \\sum_{n=0}^{\\lceil k/2 \\rceil-2}\\frac{x^{2n+1}(-1)^{\\lceil k/2 \\rceil+n}}{(2n+1)!}-(-1)^{\\lceil k/2 \\rceil}\\sin(x) &amp; \\text{If $k$ is odd} \\\\ \\sum_{n=0}^{k/2-1}\\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\\cos(x) &amp; \\text{If $k$ is even}\\end{cases}$$</p>\n\n<p>Finally we have the closed form </p>\n\n<blockquote>\n <p>$$F(k) = \\begin{cases}\\left( \\frac{2}{\\pi}\\right)^k \\sum_{n=0}^{\\lceil\n k/2 \\rceil-2}\\frac{(\\pi/2)^{2n+1}(-1)^{\\lceil k/2\n \\rceil+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^k(-1)^{\\lceil k/2\n \\rceil} &amp; \\text{If $k$ is odd} \\\\ \\left(\n \\frac{2}{\\pi}\\right)^k\\sum_{n=0}^{k/2-1}\\frac{(\\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp;\n \\text{If $k$ is even}\\end{cases}$$</p>\n</blockquote>\n\n<p>To check the correctness of the formula </p>\n\n<p>For $k=5$ we have </p>\n\n<p>$$F(5)=\\left(\\frac{2}{\\pi}\\right)^5 \\sum_{n=0}^{1}\\frac{(\\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^5(-1)^{3} = {1\\over 6}\\left({2\\over \\pi}\\right)^2-\\left({2\\over \\pi}\\right)^4+\\left({2\\over \\pi}\\right)^5$$</p>\n" }, { "answer_id": 2080175, "score": 4, "text": "<p>Your sum is the special value at $x=(-(\\pi/2)^2)$ of \n$$\\sum_{n=0}^\\infty \\frac{x^n}{(2n + k)!}=\\frac{\\, _1F_2\\left(1;\\frac{k}{2}+\\frac{1}{2},\\frac{k}{2}+1;\\frac{x}{4}\\right)}{k!}$$</p>\n" }, { "answer_id": 2081184, "score": 2, "text": "<p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br>\nIn fact, premised that for a general function of a integer $f(k)$ we have\n$$\n\\begin{gathered}\n \\frac{{\\left( {i^{\\,k} + \\left( { - i} \\right)^{\\,k} } \\right)}}\n{2}f(k) = \\left( {\\frac{{e^{\\,i\\,k\\frac{\\pi }\n{2}} + e^{\\, - \\,i\\,k\\frac{\\pi }\n{2}} }}\n{2}} \\right)f(k) = \\cos \\left( {k\\frac{\\pi }\n{2}} \\right)f(k) = \\hfill \\\\\n = \\left[ {k = 2j} \\right]\\left( { - 1} \\right)^{\\,j} f(2j)\\quad \\left| {\\;k,j\\; \\in \\;\\;\\mathbb{Z}\\,} \\right. 
\\hfill \\\\ \n\\end{gathered} \n$$\nand that the <em>Lower Incomplete Gamma</em> function can be expressed as:\n$$\n\\begin{gathered}\n \\gamma (s,z) = \\int_{\\,0\\,}^{\\,z\\,} {t^{\\,s - 1} \\;e^{\\, - \\,t} dt} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\;\\Gamma (s)\\;\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\Gamma (s + k + 1)}}} = \\frac{{z^{\\,s} \\;e^{\\,\\, - z} }}\n{s}\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\left( {s + 1} \\right)^{\\,\\overline {\\,k\\,} } \\,}}} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{s^{\\,\\overline {\\,k + 1\\,} } \\,}}} = z^{\\,s} \\sum\\limits_{0\\, \\leqslant \\,j} {\\frac{{\\left( { - 1} \\right)^{\\,j} }}\n{{\\,\\left( {s + j} \\right)}}\\frac{{z^{\\,j} }}\n{{j!}}} \\hfill \\\\ \n\\end{gathered} \n$$\nwe can then write\n$$\n\\begin{gathered}\n F(x,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{x^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\frac{1}\n{2}\\left( {\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( {i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} + \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( { - \\,i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} } \\right) = \\hfill \\\\\n = \\frac{1}\n{{2\\,\\Gamma (m)}}\\left( {\\frac{{e^{\\,\\,i\\,x} \\,\\gamma (m,i\\,x)}}\n{{\\left( {i\\,x} \\right)^{\\,m} }} + \\frac{{e^{\\, - \\,i\\,x} \\,\\gamma (m, - i\\,x)}}\n{{\\left( { - i\\,x} \\right)^{\\,m} }}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,x)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,x)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$</p>\n\n<p>So, for $x=\\pi /2$ we get\n$$\n\\begin{gathered}\n F(\\pi /2,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,\\pi /2)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,\\pi /2)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$\n<strong>example with $m=5$</strong>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,4\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(5,i\\,\\pi /2)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\,\\left( {1 - \\frac{1}\n{{24}}\\left( {12\\,\\pi - \\frac{1}\n{2}\\pi ^{\\,3} + i\\left( {3\\pi ^{\\,2} - \\frac{1}\n{{16}}\\pi ^{\\,4} - 24} \\right)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\left( {1 - 
\\,\\frac{\\pi }\n{2} + \\frac{1}\n{{48}}\\pi ^{\\,3} } \\right) = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} \\hfill \\\\ \n\\end{gathered} \n$$\nwhich matches with the value given by <em>Zaid Alyafeai</em>,\n as well as with the fomula indicated by <em>Igor Rivin</em>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} = \\hfill \\\\\n = 0.007860176 \\cdots = \\hfill \\\\\n = \\frac{1}\n{{5!}}{}_1F_2 \\left( {1\\;;\\;\\frac{5}\n{2} + \\frac{1}\n{2},\\;\\frac{5}\n{2} + 1\\;;\\; - \\frac{1}\n{4}\\left( {\\frac{\\pi }\n{2}} \\right)^{\\,2} } \\right) \\hfill \\\\ \n\\end{gathered} \n$$\nand concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p>\n" }, { "answer_id": 2080179, "score": 8, "text": "<p>Note that</p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{(2n)! } = \\cos(x)$$</p>\n\n<p>You can reach the result by integrating $k$-times. </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{(2n+k)! } = \\frac{1}{x^{k}}\\int^{x}_0 \\mathrm{d}t_{k-1}\\int^{t_{k-1}}_0 \\mathrm{d}t_{k-2} \\cdots\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) $$</p>\n\n<p>For example when $k=1$</p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n(\\pi/2)^{2n}}{(2n+1)! } = \\frac{2}{\\pi}\\int^{\\pi/2}_0 \\mathrm{d}t_0 \\cos(t_0)\\,= \\frac{2}{\\pi}$$</p>\n\n<p>For $k=2$</p>\n\n<p>\\begin{align}\n\\sum_{n=0}^\\infty \\frac{(-1)^n(\\pi/2)^{2n}}{(2n+2)! 
} &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) \\\\ &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\sin(t_1)\\\\\n&amp; = \\left( \\frac{2}{\\pi}\\right)^2\n\\end{align}</p>\n\n<p>For </p>\n\n<p>$$f_k(x) = \\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p>\n\n<p>We can define recursively </p>\n\n<p>$$f_0(x) = \\cos(x)$$</p>\n\n<p>$$f_k(x) =\\int^x_0 f_{k-1}(t)\\,dt$$</p>\n\n<p>By solving the recursive formula </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!} = \\begin{cases} \\sum_{n=0}^{\\lceil k/2 \\rceil-2}\\frac{x^{2n+1}(-1)^{\\lceil k/2 \\rceil+n}}{(2n+1)!}-(-1)^{\\lceil k/2 \\rceil}\\sin(x) &amp; \\text{If $k$ is odd} \\\\ \\sum_{n=0}^{k/2-1}\\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\\cos(x) &amp; \\text{If $k$ is even}\\end{cases}$$</p>\n\n<p>Finally we have the closed form </p>\n\n<blockquote>\n <p>$$F(k) = \\begin{cases}\\left( \\frac{2}{\\pi}\\right)^k \\sum_{n=0}^{\\lceil\n k/2 \\rceil-2}\\frac{(\\pi/2)^{2n+1}(-1)^{\\lceil k/2\n \\rceil+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^k(-1)^{\\lceil k/2\n \\rceil} &amp; \\text{If $k$ is odd} \\\\ \\left(\n \\frac{2}{\\pi}\\right)^k\\sum_{n=0}^{k/2-1}\\frac{(\\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp;\n \\text{If $k$ is even}\\end{cases}$$</p>\n</blockquote>\n\n<p>To check the correctness of the formula </p>\n\n<p>For $k=5$ we have </p>\n\n<p>$$F(5)=\\left(\\frac{2}{\\pi}\\right)^5 \\sum_{n=0}^{1}\\frac{(\\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^5(-1)^{3} = {1\\over 6}\\left({2\\over \\pi}\\right)^2-\\left({2\\over \\pi}\\right)^4+\\left({2\\over \\pi}\\right)^5$$</p>\n" }, { "answer_id": 2080175, "score": 4, "text": "<p>Your sum is the special value at $x=(-(\\pi/2)^2)$ of \n$$\\sum_{n=0}^\\infty \\frac{x^n}{(2n + k)!}=\\frac{\\, _1F_2\\left(1;\\frac{k}{2}+\\frac{1}{2},\\frac{k}{2}+1;\\frac{x}{4}\\right)}{k!}$$</p>\n" }, { "answer_id": 2081184, "score": 2, "text": "<p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br>\nIn fact, premised that for a general function of a integer $f(k)$ we have\n$$\n\\begin{gathered}\n \\frac{{\\left( {i^{\\,k} + \\left( { - i} \\right)^{\\,k} } \\right)}}\n{2}f(k) = \\left( {\\frac{{e^{\\,i\\,k\\frac{\\pi }\n{2}} + e^{\\, - \\,i\\,k\\frac{\\pi }\n{2}} }}\n{2}} \\right)f(k) = \\cos \\left( {k\\frac{\\pi }\n{2}} \\right)f(k) = \\hfill \\\\\n = \\left[ {k = 2j} \\right]\\left( { - 1} \\right)^{\\,j} f(2j)\\quad \\left| {\\;k,j\\; \\in \\;\\;\\mathbb{Z}\\,} \\right. 
\\hfill \\\\ \n\\end{gathered} \n$$\nand that the <em>Lower Incomplete Gamma</em> function can be expressed as:\n$$\n\\begin{gathered}\n \\gamma (s,z) = \\int_{\\,0\\,}^{\\,z\\,} {t^{\\,s - 1} \\;e^{\\, - \\,t} dt} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\;\\Gamma (s)\\;\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\Gamma (s + k + 1)}}} = \\frac{{z^{\\,s} \\;e^{\\,\\, - z} }}\n{s}\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\left( {s + 1} \\right)^{\\,\\overline {\\,k\\,} } \\,}}} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{s^{\\,\\overline {\\,k + 1\\,} } \\,}}} = z^{\\,s} \\sum\\limits_{0\\, \\leqslant \\,j} {\\frac{{\\left( { - 1} \\right)^{\\,j} }}\n{{\\,\\left( {s + j} \\right)}}\\frac{{z^{\\,j} }}\n{{j!}}} \\hfill \\\\ \n\\end{gathered} \n$$\nwe can then write\n$$\n\\begin{gathered}\n F(x,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{x^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\frac{1}\n{2}\\left( {\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( {i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} + \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( { - \\,i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} } \\right) = \\hfill \\\\\n = \\frac{1}\n{{2\\,\\Gamma (m)}}\\left( {\\frac{{e^{\\,\\,i\\,x} \\,\\gamma (m,i\\,x)}}\n{{\\left( {i\\,x} \\right)^{\\,m} }} + \\frac{{e^{\\, - \\,i\\,x} \\,\\gamma (m, - i\\,x)}}\n{{\\left( { - i\\,x} \\right)^{\\,m} }}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,x)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,x)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$</p>\n\n<p>So, for $x=\\pi /2$ we get\n$$\n\\begin{gathered}\n F(\\pi /2,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,\\pi /2)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,\\pi /2)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$\n<strong>example with $m=5$</strong>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,4\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(5,i\\,\\pi /2)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\,\\left( {1 - \\frac{1}\n{{24}}\\left( {12\\,\\pi - \\frac{1}\n{2}\\pi ^{\\,3} + i\\left( {3\\pi ^{\\,2} - \\frac{1}\n{{16}}\\pi ^{\\,4} - 24} \\right)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\left( {1 - 
\\,\\frac{\\pi }\n{2} + \\frac{1}\n{{48}}\\pi ^{\\,3} } \\right) = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} \\hfill \\\\ \n\\end{gathered} \n$$\nwhich matches with the value given by <em>Zaid Alyafeai</em>,\n as well as with the fomula indicated by <em>Igor Rivin</em>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} = \\hfill \\\\\n = 0.007860176 \\cdots = \\hfill \\\\\n = \\frac{1}\n{{5!}}{}_1F_2 \\left( {1\\;;\\;\\frac{5}\n{2} + \\frac{1}\n{2},\\;\\frac{5}\n{2} + 1\\;;\\; - \\frac{1}\n{4}\\left( {\\frac{\\pi }\n{2}} \\right)^{\\,2} } \\right) \\hfill \\\\ \n\\end{gathered} \n$$\nand concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p>\n" }, { "answer_id": 2080179, "score": 8, "text": "<p>Note that</p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{(2n)! } = \\cos(x)$$</p>\n\n<p>You can reach the result by integrating $k$-times. </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{(2n+k)! } = \\frac{1}{x^{k}}\\int^{x}_0 \\mathrm{d}t_{k-1}\\int^{t_{k-1}}_0 \\mathrm{d}t_{k-2} \\cdots\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) $$</p>\n\n<p>For example when $k=1$</p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n(\\pi/2)^{2n}}{(2n+1)! } = \\frac{2}{\\pi}\\int^{\\pi/2}_0 \\mathrm{d}t_0 \\cos(t_0)\\,= \\frac{2}{\\pi}$$</p>\n\n<p>For $k=2$</p>\n\n<p>\\begin{align}\n\\sum_{n=0}^\\infty \\frac{(-1)^n(\\pi/2)^{2n}}{(2n+2)! 
} &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) \\\\ &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\sin(t_1)\\\\\n&amp; = \\left( \\frac{2}{\\pi}\\right)^2\n\\end{align}</p>\n\n<p>For </p>\n\n<p>$$f_k(x) = \\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p>\n\n<p>We can define recursively </p>\n\n<p>$$f_0(x) = \\cos(x)$$</p>\n\n<p>$$f_k(x) =\\int^x_0 f_{k-1}(t)\\,dt$$</p>\n\n<p>By solving the recursive formula </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!} = \\begin{cases} \\sum_{n=0}^{\\lceil k/2 \\rceil-2}\\frac{x^{2n+1}(-1)^{\\lceil k/2 \\rceil+n}}{(2n+1)!}-(-1)^{\\lceil k/2 \\rceil}\\sin(x) &amp; \\text{If $k$ is odd} \\\\ \\sum_{n=0}^{k/2-1}\\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\\cos(x) &amp; \\text{If $k$ is even}\\end{cases}$$</p>\n\n<p>Finally we have the closed form </p>\n\n<blockquote>\n <p>$$F(k) = \\begin{cases}\\left( \\frac{2}{\\pi}\\right)^k \\sum_{n=0}^{\\lceil\n k/2 \\rceil-2}\\frac{(\\pi/2)^{2n+1}(-1)^{\\lceil k/2\n \\rceil+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^k(-1)^{\\lceil k/2\n \\rceil} &amp; \\text{If $k$ is odd} \\\\ \\left(\n \\frac{2}{\\pi}\\right)^k\\sum_{n=0}^{k/2-1}\\frac{(\\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp;\n \\text{If $k$ is even}\\end{cases}$$</p>\n</blockquote>\n\n<p>To check the correctness of the formula </p>\n\n<p>For $k=5$ we have </p>\n\n<p>$$F(5)=\\left(\\frac{2}{\\pi}\\right)^5 \\sum_{n=0}^{1}\\frac{(\\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^5(-1)^{3} = {1\\over 6}\\left({2\\over \\pi}\\right)^2-\\left({2\\over \\pi}\\right)^4+\\left({2\\over \\pi}\\right)^5$$</p>\n" }, { "answer_id": 2080175, "score": 4, "text": "<p>Your sum is the special value at $x=(-(\\pi/2)^2)$ of \n$$\\sum_{n=0}^\\infty \\frac{x^n}{(2n + k)!}=\\frac{\\, _1F_2\\left(1;\\frac{k}{2}+\\frac{1}{2},\\frac{k}{2}+1;\\frac{x}{4}\\right)}{k!}$$</p>\n" }, { "answer_id": 2081184, "score": 2, "text": "<p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br>\nIn fact, premised that for a general function of a integer $f(k)$ we have\n$$\n\\begin{gathered}\n \\frac{{\\left( {i^{\\,k} + \\left( { - i} \\right)^{\\,k} } \\right)}}\n{2}f(k) = \\left( {\\frac{{e^{\\,i\\,k\\frac{\\pi }\n{2}} + e^{\\, - \\,i\\,k\\frac{\\pi }\n{2}} }}\n{2}} \\right)f(k) = \\cos \\left( {k\\frac{\\pi }\n{2}} \\right)f(k) = \\hfill \\\\\n = \\left[ {k = 2j} \\right]\\left( { - 1} \\right)^{\\,j} f(2j)\\quad \\left| {\\;k,j\\; \\in \\;\\;\\mathbb{Z}\\,} \\right. 
\\hfill \\\\ \n\\end{gathered} \n$$\nand that the <em>Lower Incomplete Gamma</em> function can be expressed as:\n$$\n\\begin{gathered}\n \\gamma (s,z) = \\int_{\\,0\\,}^{\\,z\\,} {t^{\\,s - 1} \\;e^{\\, - \\,t} dt} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\;\\Gamma (s)\\;\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\Gamma (s + k + 1)}}} = \\frac{{z^{\\,s} \\;e^{\\,\\, - z} }}\n{s}\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\left( {s + 1} \\right)^{\\,\\overline {\\,k\\,} } \\,}}} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{s^{\\,\\overline {\\,k + 1\\,} } \\,}}} = z^{\\,s} \\sum\\limits_{0\\, \\leqslant \\,j} {\\frac{{\\left( { - 1} \\right)^{\\,j} }}\n{{\\,\\left( {s + j} \\right)}}\\frac{{z^{\\,j} }}\n{{j!}}} \\hfill \\\\ \n\\end{gathered} \n$$\nwe can then write\n$$\n\\begin{gathered}\n F(x,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{x^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\frac{1}\n{2}\\left( {\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( {i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} + \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( { - \\,i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} } \\right) = \\hfill \\\\\n = \\frac{1}\n{{2\\,\\Gamma (m)}}\\left( {\\frac{{e^{\\,\\,i\\,x} \\,\\gamma (m,i\\,x)}}\n{{\\left( {i\\,x} \\right)^{\\,m} }} + \\frac{{e^{\\, - \\,i\\,x} \\,\\gamma (m, - i\\,x)}}\n{{\\left( { - i\\,x} \\right)^{\\,m} }}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,x)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,x)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$</p>\n\n<p>So, for $x=\\pi /2$ we get\n$$\n\\begin{gathered}\n F(\\pi /2,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,\\pi /2)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,\\pi /2)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$\n<strong>example with $m=5$</strong>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,4\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(5,i\\,\\pi /2)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\,\\left( {1 - \\frac{1}\n{{24}}\\left( {12\\,\\pi - \\frac{1}\n{2}\\pi ^{\\,3} + i\\left( {3\\pi ^{\\,2} - \\frac{1}\n{{16}}\\pi ^{\\,4} - 24} \\right)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\left( {1 - 
\\,\\frac{\\pi }\n{2} + \\frac{1}\n{{48}}\\pi ^{\\,3} } \\right) = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} \\hfill \\\\ \n\\end{gathered} \n$$\nwhich matches with the value given by <em>Zaid Alyafeai</em>,\n as well as with the fomula indicated by <em>Igor Rivin</em>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} = \\hfill \\\\\n = 0.007860176 \\cdots = \\hfill \\\\\n = \\frac{1}\n{{5!}}{}_1F_2 \\left( {1\\;;\\;\\frac{5}\n{2} + \\frac{1}\n{2},\\;\\frac{5}\n{2} + 1\\;;\\; - \\frac{1}\n{4}\\left( {\\frac{\\pi }\n{2}} \\right)^{\\,2} } \\right) \\hfill \\\\ \n\\end{gathered} \n$$\nand concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p>\n" }, { "answer_id": 2080179, "score": 8, "text": "<p>Note that</p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{(2n)! } = \\cos(x)$$</p>\n\n<p>You can reach the result by integrating $k$-times. </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{(2n+k)! } = \\frac{1}{x^{k}}\\int^{x}_0 \\mathrm{d}t_{k-1}\\int^{t_{k-1}}_0 \\mathrm{d}t_{k-2} \\cdots\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) $$</p>\n\n<p>For example when $k=1$</p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n(\\pi/2)^{2n}}{(2n+1)! } = \\frac{2}{\\pi}\\int^{\\pi/2}_0 \\mathrm{d}t_0 \\cos(t_0)\\,= \\frac{2}{\\pi}$$</p>\n\n<p>For $k=2$</p>\n\n<p>\\begin{align}\n\\sum_{n=0}^\\infty \\frac{(-1)^n(\\pi/2)^{2n}}{(2n+2)! 
} &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) \\\\ &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\sin(t_1)\\\\\n&amp; = \\left( \\frac{2}{\\pi}\\right)^2\n\\end{align}</p>\n\n<p>For </p>\n\n<p>$$f_k(x) = \\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p>\n\n<p>We can define recursively </p>\n\n<p>$$f_0(x) = \\cos(x)$$</p>\n\n<p>$$f_k(x) =\\int^x_0 f_{k-1}(t)\\,dt$$</p>\n\n<p>By solving the recursive formula </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!} = \\begin{cases} \\sum_{n=0}^{\\lceil k/2 \\rceil-2}\\frac{x^{2n+1}(-1)^{\\lceil k/2 \\rceil+n}}{(2n+1)!}-(-1)^{\\lceil k/2 \\rceil}\\sin(x) &amp; \\text{If $k$ is odd} \\\\ \\sum_{n=0}^{k/2-1}\\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\\cos(x) &amp; \\text{If $k$ is even}\\end{cases}$$</p>\n\n<p>Finally we have the closed form </p>\n\n<blockquote>\n <p>$$F(k) = \\begin{cases}\\left( \\frac{2}{\\pi}\\right)^k \\sum_{n=0}^{\\lceil\n k/2 \\rceil-2}\\frac{(\\pi/2)^{2n+1}(-1)^{\\lceil k/2\n \\rceil+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^k(-1)^{\\lceil k/2\n \\rceil} &amp; \\text{If $k$ is odd} \\\\ \\left(\n \\frac{2}{\\pi}\\right)^k\\sum_{n=0}^{k/2-1}\\frac{(\\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp;\n \\text{If $k$ is even}\\end{cases}$$</p>\n</blockquote>\n\n<p>To check the correctness of the formula </p>\n\n<p>For $k=5$ we have </p>\n\n<p>$$F(5)=\\left(\\frac{2}{\\pi}\\right)^5 \\sum_{n=0}^{1}\\frac{(\\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^5(-1)^{3} = {1\\over 6}\\left({2\\over \\pi}\\right)^2-\\left({2\\over \\pi}\\right)^4+\\left({2\\over \\pi}\\right)^5$$</p>\n" }, { "answer_id": 2080175, "score": 4, "text": "<p>Your sum is the special value at $x=(-(\\pi/2)^2)$ of \n$$\\sum_{n=0}^\\infty \\frac{x^n}{(2n + k)!}=\\frac{\\, _1F_2\\left(1;\\frac{k}{2}+\\frac{1}{2},\\frac{k}{2}+1;\\frac{x}{4}\\right)}{k!}$$</p>\n" }, { "answer_id": 2081184, "score": 2, "text": "<p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br>\nIn fact, premised that for a general function of a integer $f(k)$ we have\n$$\n\\begin{gathered}\n \\frac{{\\left( {i^{\\,k} + \\left( { - i} \\right)^{\\,k} } \\right)}}\n{2}f(k) = \\left( {\\frac{{e^{\\,i\\,k\\frac{\\pi }\n{2}} + e^{\\, - \\,i\\,k\\frac{\\pi }\n{2}} }}\n{2}} \\right)f(k) = \\cos \\left( {k\\frac{\\pi }\n{2}} \\right)f(k) = \\hfill \\\\\n = \\left[ {k = 2j} \\right]\\left( { - 1} \\right)^{\\,j} f(2j)\\quad \\left| {\\;k,j\\; \\in \\;\\;\\mathbb{Z}\\,} \\right. 
\\hfill \\\\ \n\\end{gathered} \n$$\nand that the <em>Lower Incomplete Gamma</em> function can be expressed as:\n$$\n\\begin{gathered}\n \\gamma (s,z) = \\int_{\\,0\\,}^{\\,z\\,} {t^{\\,s - 1} \\;e^{\\, - \\,t} dt} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\;\\Gamma (s)\\;\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\Gamma (s + k + 1)}}} = \\frac{{z^{\\,s} \\;e^{\\,\\, - z} }}\n{s}\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\left( {s + 1} \\right)^{\\,\\overline {\\,k\\,} } \\,}}} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{s^{\\,\\overline {\\,k + 1\\,} } \\,}}} = z^{\\,s} \\sum\\limits_{0\\, \\leqslant \\,j} {\\frac{{\\left( { - 1} \\right)^{\\,j} }}\n{{\\,\\left( {s + j} \\right)}}\\frac{{z^{\\,j} }}\n{{j!}}} \\hfill \\\\ \n\\end{gathered} \n$$\nwe can then write\n$$\n\\begin{gathered}\n F(x,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{x^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\frac{1}\n{2}\\left( {\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( {i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} + \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( { - \\,i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} } \\right) = \\hfill \\\\\n = \\frac{1}\n{{2\\,\\Gamma (m)}}\\left( {\\frac{{e^{\\,\\,i\\,x} \\,\\gamma (m,i\\,x)}}\n{{\\left( {i\\,x} \\right)^{\\,m} }} + \\frac{{e^{\\, - \\,i\\,x} \\,\\gamma (m, - i\\,x)}}\n{{\\left( { - i\\,x} \\right)^{\\,m} }}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,x)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,x)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$</p>\n\n<p>So, for $x=\\pi /2$ we get\n$$\n\\begin{gathered}\n F(\\pi /2,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,\\pi /2)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,\\pi /2)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$\n<strong>example with $m=5$</strong>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,4\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(5,i\\,\\pi /2)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\,\\left( {1 - \\frac{1}\n{{24}}\\left( {12\\,\\pi - \\frac{1}\n{2}\\pi ^{\\,3} + i\\left( {3\\pi ^{\\,2} - \\frac{1}\n{{16}}\\pi ^{\\,4} - 24} \\right)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\left( {1 - 
\\,\\frac{\\pi }\n{2} + \\frac{1}\n{{48}}\\pi ^{\\,3} } \\right) = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} \\hfill \\\\ \n\\end{gathered} \n$$\nwhich matches with the value given by <em>Zaid Alyafeai</em>,\n as well as with the fomula indicated by <em>Igor Rivin</em>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} = \\hfill \\\\\n = 0.007860176 \\cdots = \\hfill \\\\\n = \\frac{1}\n{{5!}}{}_1F_2 \\left( {1\\;;\\;\\frac{5}\n{2} + \\frac{1}\n{2},\\;\\frac{5}\n{2} + 1\\;;\\; - \\frac{1}\n{4}\\left( {\\frac{\\pi }\n{2}} \\right)^{\\,2} } \\right) \\hfill \\\\ \n\\end{gathered} \n$$\nand concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p>\n" }, { "answer_id": 2080179, "score": 8, "text": "<p>Note that</p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{(2n)! } = \\cos(x)$$</p>\n\n<p>You can reach the result by integrating $k$-times. </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{(2n+k)! } = \\frac{1}{x^{k}}\\int^{x}_0 \\mathrm{d}t_{k-1}\\int^{t_{k-1}}_0 \\mathrm{d}t_{k-2} \\cdots\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) $$</p>\n\n<p>For example when $k=1$</p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n(\\pi/2)^{2n}}{(2n+1)! } = \\frac{2}{\\pi}\\int^{\\pi/2}_0 \\mathrm{d}t_0 \\cos(t_0)\\,= \\frac{2}{\\pi}$$</p>\n\n<p>For $k=2$</p>\n\n<p>\\begin{align}\n\\sum_{n=0}^\\infty \\frac{(-1)^n(\\pi/2)^{2n}}{(2n+2)! 
} &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) \\\\ &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\sin(t_1)\\\\\n&amp; = \\left( \\frac{2}{\\pi}\\right)^2\n\\end{align}</p>\n\n<p>For </p>\n\n<p>$$f_k(x) = \\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p>\n\n<p>We can define recursively </p>\n\n<p>$$f_0(x) = \\cos(x)$$</p>\n\n<p>$$f_k(x) =\\int^x_0 f_{k-1}(t)\\,dt$$</p>\n\n<p>By solving the recursive formula </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!} = \\begin{cases} \\sum_{n=0}^{\\lceil k/2 \\rceil-2}\\frac{x^{2n+1}(-1)^{\\lceil k/2 \\rceil+n}}{(2n+1)!}-(-1)^{\\lceil k/2 \\rceil}\\sin(x) &amp; \\text{If $k$ is odd} \\\\ \\sum_{n=0}^{k/2-1}\\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\\cos(x) &amp; \\text{If $k$ is even}\\end{cases}$$</p>\n\n<p>Finally we have the closed form </p>\n\n<blockquote>\n <p>$$F(k) = \\begin{cases}\\left( \\frac{2}{\\pi}\\right)^k \\sum_{n=0}^{\\lceil\n k/2 \\rceil-2}\\frac{(\\pi/2)^{2n+1}(-1)^{\\lceil k/2\n \\rceil+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^k(-1)^{\\lceil k/2\n \\rceil} &amp; \\text{If $k$ is odd} \\\\ \\left(\n \\frac{2}{\\pi}\\right)^k\\sum_{n=0}^{k/2-1}\\frac{(\\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp;\n \\text{If $k$ is even}\\end{cases}$$</p>\n</blockquote>\n\n<p>To check the correctness of the formula </p>\n\n<p>For $k=5$ we have </p>\n\n<p>$$F(5)=\\left(\\frac{2}{\\pi}\\right)^5 \\sum_{n=0}^{1}\\frac{(\\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^5(-1)^{3} = {1\\over 6}\\left({2\\over \\pi}\\right)^2-\\left({2\\over \\pi}\\right)^4+\\left({2\\over \\pi}\\right)^5$$</p>\n" }, { "answer_id": 2080175, "score": 4, "text": "<p>Your sum is the special value at $x=(-(\\pi/2)^2)$ of \n$$\\sum_{n=0}^\\infty \\frac{x^n}{(2n + k)!}=\\frac{\\, _1F_2\\left(1;\\frac{k}{2}+\\frac{1}{2},\\frac{k}{2}+1;\\frac{x}{4}\\right)}{k!}$$</p>\n" }, { "answer_id": 2081184, "score": 2, "text": "<p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br>\nIn fact, premised that for a general function of a integer $f(k)$ we have\n$$\n\\begin{gathered}\n \\frac{{\\left( {i^{\\,k} + \\left( { - i} \\right)^{\\,k} } \\right)}}\n{2}f(k) = \\left( {\\frac{{e^{\\,i\\,k\\frac{\\pi }\n{2}} + e^{\\, - \\,i\\,k\\frac{\\pi }\n{2}} }}\n{2}} \\right)f(k) = \\cos \\left( {k\\frac{\\pi }\n{2}} \\right)f(k) = \\hfill \\\\\n = \\left[ {k = 2j} \\right]\\left( { - 1} \\right)^{\\,j} f(2j)\\quad \\left| {\\;k,j\\; \\in \\;\\;\\mathbb{Z}\\,} \\right. 
\\hfill \\\\ \n\\end{gathered} \n$$\nand that the <em>Lower Incomplete Gamma</em> function can be expressed as:\n$$\n\\begin{gathered}\n \\gamma (s,z) = \\int_{\\,0\\,}^{\\,z\\,} {t^{\\,s - 1} \\;e^{\\, - \\,t} dt} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\;\\Gamma (s)\\;\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\Gamma (s + k + 1)}}} = \\frac{{z^{\\,s} \\;e^{\\,\\, - z} }}\n{s}\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\left( {s + 1} \\right)^{\\,\\overline {\\,k\\,} } \\,}}} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{s^{\\,\\overline {\\,k + 1\\,} } \\,}}} = z^{\\,s} \\sum\\limits_{0\\, \\leqslant \\,j} {\\frac{{\\left( { - 1} \\right)^{\\,j} }}\n{{\\,\\left( {s + j} \\right)}}\\frac{{z^{\\,j} }}\n{{j!}}} \\hfill \\\\ \n\\end{gathered} \n$$\nwe can then write\n$$\n\\begin{gathered}\n F(x,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{x^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\frac{1}\n{2}\\left( {\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( {i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} + \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( { - \\,i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} } \\right) = \\hfill \\\\\n = \\frac{1}\n{{2\\,\\Gamma (m)}}\\left( {\\frac{{e^{\\,\\,i\\,x} \\,\\gamma (m,i\\,x)}}\n{{\\left( {i\\,x} \\right)^{\\,m} }} + \\frac{{e^{\\, - \\,i\\,x} \\,\\gamma (m, - i\\,x)}}\n{{\\left( { - i\\,x} \\right)^{\\,m} }}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,x)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,x)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$</p>\n\n<p>So, for $x=\\pi /2$ we get\n$$\n\\begin{gathered}\n F(\\pi /2,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,\\pi /2)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,\\pi /2)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$\n<strong>example with $m=5$</strong>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,4\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(5,i\\,\\pi /2)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\,\\left( {1 - \\frac{1}\n{{24}}\\left( {12\\,\\pi - \\frac{1}\n{2}\\pi ^{\\,3} + i\\left( {3\\pi ^{\\,2} - \\frac{1}\n{{16}}\\pi ^{\\,4} - 24} \\right)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\left( {1 - 
\\,\\frac{\\pi }\n{2} + \\frac{1}\n{{48}}\\pi ^{\\,3} } \\right) = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} \\hfill \\\\ \n\\end{gathered} \n$$\nwhich matches with the value given by <em>Zaid Alyafeai</em>,\n as well as with the fomula indicated by <em>Igor Rivin</em>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} = \\hfill \\\\\n = 0.007860176 \\cdots = \\hfill \\\\\n = \\frac{1}\n{{5!}}{}_1F_2 \\left( {1\\;;\\;\\frac{5}\n{2} + \\frac{1}\n{2},\\;\\frac{5}\n{2} + 1\\;;\\; - \\frac{1}\n{4}\\left( {\\frac{\\pi }\n{2}} \\right)^{\\,2} } \\right) \\hfill \\\\ \n\\end{gathered} \n$$\nand concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p>\n" }, { "answer_id": 2080179, "score": 8, "text": "<p>Note that</p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{(2n)! } = \\cos(x)$$</p>\n\n<p>You can reach the result by integrating $k$-times. </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^nx^{2n}}{(2n+k)! } = \\frac{1}{x^{k}}\\int^{x}_0 \\mathrm{d}t_{k-1}\\int^{t_{k-1}}_0 \\mathrm{d}t_{k-2} \\cdots\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) $$</p>\n\n<p>For example when $k=1$</p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n(\\pi/2)^{2n}}{(2n+1)! } = \\frac{2}{\\pi}\\int^{\\pi/2}_0 \\mathrm{d}t_0 \\cos(t_0)\\,= \\frac{2}{\\pi}$$</p>\n\n<p>For $k=2$</p>\n\n<p>\\begin{align}\n\\sum_{n=0}^\\infty \\frac{(-1)^n(\\pi/2)^{2n}}{(2n+2)! 
} &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\int^{t_1}_0\\mathrm{d}t_0\\cos(t_0) \\\\ &amp;= \\left( \\frac{2}{\\pi}\\right)^2\\int^{\\pi/2}_0 \\mathrm{d}t_{1}\\sin(t_1)\\\\\n&amp; = \\left( \\frac{2}{\\pi}\\right)^2\n\\end{align}</p>\n\n<p>For </p>\n\n<p>$$f_k(x) = \\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p>\n\n<p>We can define recursively </p>\n\n<p>$$f_0(x) = \\cos(x)$$</p>\n\n<p>$$f_k(x) =\\int^x_0 f_{k-1}(t)\\,dt$$</p>\n\n<p>By solving the recursive formula </p>\n\n<p>$$\\sum_{n=0}^\\infty \\frac{(-1)^n x^{2n+k}}{(2n+k)!} = \\begin{cases} \\sum_{n=0}^{\\lceil k/2 \\rceil-2}\\frac{x^{2n+1}(-1)^{\\lceil k/2 \\rceil+n}}{(2n+1)!}-(-1)^{\\lceil k/2 \\rceil}\\sin(x) &amp; \\text{If $k$ is odd} \\\\ \\sum_{n=0}^{k/2-1}\\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\\cos(x) &amp; \\text{If $k$ is even}\\end{cases}$$</p>\n\n<p>Finally we have the closed form </p>\n\n<blockquote>\n <p>$$F(k) = \\begin{cases}\\left( \\frac{2}{\\pi}\\right)^k \\sum_{n=0}^{\\lceil\n k/2 \\rceil-2}\\frac{(\\pi/2)^{2n+1}(-1)^{\\lceil k/2\n \\rceil+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^k(-1)^{\\lceil k/2\n \\rceil} &amp; \\text{If $k$ is odd} \\\\ \\left(\n \\frac{2}{\\pi}\\right)^k\\sum_{n=0}^{k/2-1}\\frac{(\\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp;\n \\text{If $k$ is even}\\end{cases}$$</p>\n</blockquote>\n\n<p>To check the correctness of the formula </p>\n\n<p>For $k=5$ we have </p>\n\n<p>$$F(5)=\\left(\\frac{2}{\\pi}\\right)^5 \\sum_{n=0}^{1}\\frac{(\\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\\left( \\frac{2}{\\pi}\\right)^5(-1)^{3} = {1\\over 6}\\left({2\\over \\pi}\\right)^2-\\left({2\\over \\pi}\\right)^4+\\left({2\\over \\pi}\\right)^5$$</p>\n" }, { "answer_id": 2080175, "score": 4, "text": "<p>Your sum is the special value at $x=(-(\\pi/2)^2)$ of \n$$\\sum_{n=0}^\\infty \\frac{x^n}{(2n + k)!}=\\frac{\\, _1F_2\\left(1;\\frac{k}{2}+\\frac{1}{2},\\frac{k}{2}+1;\\frac{x}{4}\\right)}{k!}$$</p>\n" }, { "answer_id": 2081184, "score": 2, "text": "<p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br>\nIn fact, premised that for a general function of a integer $f(k)$ we have\n$$\n\\begin{gathered}\n \\frac{{\\left( {i^{\\,k} + \\left( { - i} \\right)^{\\,k} } \\right)}}\n{2}f(k) = \\left( {\\frac{{e^{\\,i\\,k\\frac{\\pi }\n{2}} + e^{\\, - \\,i\\,k\\frac{\\pi }\n{2}} }}\n{2}} \\right)f(k) = \\cos \\left( {k\\frac{\\pi }\n{2}} \\right)f(k) = \\hfill \\\\\n = \\left[ {k = 2j} \\right]\\left( { - 1} \\right)^{\\,j} f(2j)\\quad \\left| {\\;k,j\\; \\in \\;\\;\\mathbb{Z}\\,} \\right. 
\\hfill \\\\ \n\\end{gathered} \n$$\nand that the <em>Lower Incomplete Gamma</em> function can be expressed as:\n$$\n\\begin{gathered}\n \\gamma (s,z) = \\int_{\\,0\\,}^{\\,z\\,} {t^{\\,s - 1} \\;e^{\\, - \\,t} dt} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\;\\Gamma (s)\\;\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\Gamma (s + k + 1)}}} = \\frac{{z^{\\,s} \\;e^{\\,\\, - z} }}\n{s}\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{\\left( {s + 1} \\right)^{\\,\\overline {\\,k\\,} } \\,}}} = \\hfill \\\\\n = z^{\\,s} \\;e^{\\,\\, - z} \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{z^{\\,k} }}\n{{s^{\\,\\overline {\\,k + 1\\,} } \\,}}} = z^{\\,s} \\sum\\limits_{0\\, \\leqslant \\,j} {\\frac{{\\left( { - 1} \\right)^{\\,j} }}\n{{\\,\\left( {s + j} \\right)}}\\frac{{z^{\\,j} }}\n{{j!}}} \\hfill \\\\ \n\\end{gathered} \n$$\nwe can then write\n$$\n\\begin{gathered}\n F(x,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{x^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\frac{1}\n{2}\\left( {\\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( {i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} + \\sum\\limits_{0\\, \\leqslant \\,k} {\\frac{{\\left( { - \\,i\\,x} \\right)^{\\,k} }}\n{{\\left( {k + m} \\right)!}}} } \\right) = \\hfill \\\\\n = \\frac{1}\n{{2\\,\\Gamma (m)}}\\left( {\\frac{{e^{\\,\\,i\\,x} \\,\\gamma (m,i\\,x)}}\n{{\\left( {i\\,x} \\right)^{\\,m} }} + \\frac{{e^{\\, - \\,i\\,x} \\,\\gamma (m, - i\\,x)}}\n{{\\left( { - i\\,x} \\right)^{\\,m} }}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,x)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\frac{1}\n{{\\left| x \\right|^{\\,\\,m} }}\\,\\operatorname{Re} \\left( {e^{\\,\\,i\\,x} e^{\\,\\, - \\,i\\,m\\,\\left( {sign(x)\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,x)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$</p>\n\n<p>So, for $x=\\pi /2$ we get\n$$\n\\begin{gathered}\n F(\\pi /2,m) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + m} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\frac{{\\gamma (m,i\\,\\pi /2)}}\n{{\\Gamma (m)}}} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,m} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,\\left( {m - 1} \\right)\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(m,i\\,\\pi /2)} \\right)} \\right) \\hfill \\\\ \n\\end{gathered} \n$$\n<strong>example with $m=5$</strong>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\left( {e^{\\,\\, - \\,i\\,4\\,\\left( {\\pi /2} \\right)} \\,\\left( {1 - Q(5,i\\,\\pi /2)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\,\\operatorname{Re} \\,\\left( {1 - \\frac{1}\n{{24}}\\left( {12\\,\\pi - \\frac{1}\n{2}\\pi ^{\\,3} + i\\left( {3\\pi ^{\\,2} - \\frac{1}\n{{16}}\\pi ^{\\,4} - 24} \\right)} \\right)} \\right) = \\hfill \\\\\n = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} \\left( {1 - 
\\,\\frac{\\pi }\n{2} + \\frac{1}\n{{48}}\\pi ^{\\,3} } \\right) = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} \\hfill \\\\ \n\\end{gathered} \n$$\nwhich matches the value given by <em>Zaid Alyafeai</em>,\n as well as the formula indicated by <em>Igor Rivin</em>\n$$\n\\begin{gathered}\n F(\\pi /2,5) = \\sum\\limits_{0\\, \\leqslant \\,k} {\\left( { - 1} \\right)^{\\,k} \\frac{{\\left( {\\pi /2} \\right)^{\\,2k} }}\n{{\\left( {2k + 5} \\right)!}}} = \\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,5} - \\,\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,\\,4} + \\frac{1}\n{6}\\left( {\\frac{2}\n{\\pi }} \\right)^{\\,2} = \\hfill \\\\\n = 0.007860176 \\cdots = \\hfill \\\\\n = \\frac{1}\n{{5!}}{}_1F_2 \\left( {1\\;;\\;\\frac{5}\n{2} + \\frac{1}\n{2},\\;\\frac{5}\n{2} + 1\\;;\\; - \\frac{1}\n{4}\\left( {\\frac{\\pi }\n{2}} \\right)^{\\,2} } \\right) \\hfill \\\\ \n\\end{gathered} \n$$\nand concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p>\n" } ]
Question: Help find closed form for:$\sum_{n=0}^{\infty}{(-1)^n\left({{\pi\over 2}}\right)^{2n}\over (2n+k)!}=F(k)$ Answer 1: <p>Note that</p>

<p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)! } = \cos(x)$$</p>

<p>You can reach the result by integrating $k$ times. </p>

<p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n+k)! } = \frac{1}{x^{k}}\int^{x}_0 \mathrm{d}t_{k-1}\int^{t_{k-1}}_0 \mathrm{d}t_{k-2} \cdots\int^{t_1}_0\mathrm{d}t_0\cos(t_0) $$</p>

<p>For example, when $k=1$</p>

<p>$$\sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+1)! } = \frac{2}{\pi}\int^{\pi/2}_0 \mathrm{d}t_0 \cos(t_0)\,= \frac{2}{\pi}$$</p>

<p>For $k=2$</p>

<p>\begin{align}
\sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+2)! } &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\int^{t_1}_0\mathrm{d}t_0\cos(t_0) \\ &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\sin(t_1)\\
&amp; = \left( \frac{2}{\pi}\right)^2
\end{align}</p>

<p>For </p>

<p>$$f_k(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p>

<p>We can define recursively </p>

<p>$$f_0(x) = \cos(x)$$</p>

<p>$$f_k(x) =\int^x_0 f_{k-1}(t)\,dt$$</p>

<p>By solving the recursive formula </p>

<p>$$\sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!} = \begin{cases} \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{x^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-(-1)^{\lceil k/2 \rceil}\sin(x) &amp; \text{If $k$ is odd} \\ \sum_{n=0}^{k/2-1}\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\cos(x) &amp; \text{If $k$ is even}\end{cases}$$</p>

<p>Finally we have the closed form </p>

<blockquote>
 <p>$$F(k) = \begin{cases}\left( \frac{2}{\pi}\right)^k \sum_{n=0}^{\lceil
 k/2 \rceil-2}\frac{(\pi/2)^{2n+1}(-1)^{\lceil k/2
 \rceil+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^k(-1)^{\lceil k/2
 \rceil} &amp; \text{If $k$ is odd} \\ \left(
 \frac{2}{\pi}\right)^k\sum_{n=0}^{k/2-1}\frac{(\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp;
 \text{If $k$ is even}\end{cases}$$</p>
</blockquote>

<p>To check the correctness of the formula </p>

<p>For $k=5$ we have </p>

<p>$$F(5)=\left(\frac{2}{\pi}\right)^5 \sum_{n=0}^{1}\frac{(\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^5(-1)^{3} = {1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$$</p> Answer 2: <p>Your sum is the special value at $x=(-(\pi/2)^2)$ of 
$$\sum_{n=0}^\infty \frac{x^n}{(2n + k)!}=\frac{\, _1F_2\left(1;\frac{k}{2}+\frac{1}{2},\frac{k}{2}+1;\frac{x}{4}\right)}{k!}$$</p> Answer 3: <p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br>
In fact, premised that for a general function of an integer $f(k)$ we have
$$
\begin{gathered}
 \frac{{\left( {i^{\,k} + \left( { - i} \right)^{\,k} } \right)}}
{2}f(k) = \left( {\frac{{e^{\,i\,k\frac{\pi }
{2}} + e^{\, - \,i\,k\frac{\pi }
{2}} }}
{2}} \right)f(k) = \cos \left( {k\frac{\pi }
{2}} \right)f(k) = \hfill \\
 = \left[ {k = 2j} \right]\left( { - 1} \right)^{\,j} f(2j)\quad \left| {\;k,j\; \in \;\;\mathbb{Z}\,} \right. 
\hfill \\ 
\end{gathered} 
$$
and that the <em>Lower Incomplete Gamma</em> function can be expressed as:
$$
\begin{gathered}
 \gamma (s,z) = \int_{\,0\,}^{\,z\,} {t^{\,s - 1} \;e^{\, - \,t} dt} = \hfill \\
 = z^{\,s} \;e^{\,\, - z} \;\Gamma (s)\;\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }}
{{\Gamma (s + k + 1)}}} = \frac{{z^{\,s} \;e^{\,\, - z} }}
{s}\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }}
{{\left( {s + 1} \right)^{\,\overline {\,k\,} } \,}}} = \hfill \\
 = z^{\,s} \;e^{\,\, - z} \sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }}
{{s^{\,\overline {\,k + 1\,} } \,}}} = z^{\,s} \sum\limits_{0\, \leqslant \,j} {\frac{{\left( { - 1} \right)^{\,j} }}
{{\,\left( {s + j} \right)}}\frac{{z^{\,j} }}
{{j!}}} \hfill \\ 
\end{gathered} 
$$
we can then write
$$
\begin{gathered}
 F(x,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{x^{\,2k} }}
{{\left( {2k + m} \right)!}}} = \hfill \\
 = \frac{1}
{2}\left( {\sum\limits_{0\, \leqslant \,k} {\frac{{\left( {i\,x} \right)^{\,k} }}
{{\left( {k + m} \right)!}}} + \sum\limits_{0\, \leqslant \,k} {\frac{{\left( { - \,i\,x} \right)^{\,k} }}
{{\left( {k + m} \right)!}}} } \right) = \hfill \\
 = \frac{1}
{{2\,\Gamma (m)}}\left( {\frac{{e^{\,\,i\,x} \,\gamma (m,i\,x)}}
{{\left( {i\,x} \right)^{\,m} }} + \frac{{e^{\, - \,i\,x} \,\gamma (m, - i\,x)}}
{{\left( { - i\,x} \right)^{\,m} }}} \right) = \hfill \\
 = \frac{1}
{{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\frac{{\gamma (m,i\,x)}}
{{\Gamma (m)}}} \right) = \hfill \\
 = \frac{1}
{{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\left( {1 - Q(m,i\,x)} \right)} \right) \hfill \\ 
\end{gathered} 
$$</p>

<p>So, for $x=\pi /2$ we get
$$
\begin{gathered}
 F(\pi /2,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }}
{{\left( {2k + m} \right)!}}} = \hfill \\
 = \left( {\frac{2}
{\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\frac{{\gamma (m,i\,\pi /2)}}
{{\Gamma (m)}}} \right) = \hfill \\
 = \left( {\frac{2}
{\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\left( {1 - Q(m,i\,\pi /2)} \right)} \right) \hfill \\ 
\end{gathered} 
$$
<strong>example with $m=5$</strong>
$$
\begin{gathered}
 F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }}
{{\left( {2k + 5} \right)!}}} = \hfill \\
 = \left( {\frac{2}
{\pi }} \right)^{\,\,5} \,\operatorname{Re} \left( {e^{\,\, - \,i\,4\,\left( {\pi /2} \right)} \,\left( {1 - Q(5,i\,\pi /2)} \right)} \right) = \hfill \\
 = \left( {\frac{2}
{\pi }} \right)^{\,\,5} \,\operatorname{Re} \,\left( {1 - \frac{1}
{{24}}\left( {12\,\pi - \frac{1}
{2}\pi ^{\,3} + i\left( {3\pi ^{\,2} - \frac{1}
{{16}}\pi ^{\,4} - 24} \right)} \right)} \right) = \hfill \\
 = \left( {\frac{2}
{\pi }} \right)^{\,\,5} \left( {1 - \,\frac{\pi }
{2} + \frac{1}
{{48}}\pi ^{\,3} } \right) = \left( {\frac{2}
{\pi }} \right)^{\,\,5} - \,\left( {\frac{2}
{\pi }} \right)^{\,\,4} + \frac{1}
{6}\left( {\frac{2}
{\pi }} \right)^{\,2} \hfill \\ 
\end{gathered} 
$$
which matches the value given by <em>Zaid Alyafeai</em>,
 as well as the formula indicated by <em>Igor Rivin</em>
$$
\begin{gathered}
 F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} 
{{\left( {2k + 5} \right)!}}} = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} = \hfill \\ = 0.007860176 \cdots = \hfill \\ = \frac{1} {{5!}}{}_1F_2 \left( {1\;;\;\frac{5} {2} + \frac{1} {2},\;\frac{5} {2} + 1\;;\; - \frac{1} {4}\left( {\frac{\pi } {2}} \right)^{\,2} } \right) \hfill \\ \end{gathered} $$ and concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p> Answer 4: <p>Note that</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)! } = \cos(x)$$</p> <p>You can reach the result by integrating $k$-times. </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n+k)! } = \frac{1}{x^{k}}\int^{x}_0 \mathrm{d}t_{k-1}\int^{t_{k-1}}_0 \mathrm{d}t_{k-2} \cdots\int^{t_1}_0\mathrm{d}t_0\cos(t_0) $$</p> <p>For example when $k=1$</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+1)! } = \frac{2}{\pi}\int^{\pi/2}_0 \mathrm{d}t_0 \cos(t_0)\,= \frac{2}{\pi}$$</p> <p>For $k=2$</p> <p>\begin{align} \sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+2)! } &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\int^{t_1}_0\mathrm{d}t_0\cos(t_0) \\ &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\sin(t_1)\\ &amp; = \left( \frac{2}{\pi}\right)^2 \end{align}</p> <p>For </p> <p>$$f_k(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p> <p>We can define recursively </p> <p>$$f_0(x) = \cos(x)$$</p> <p>$$f_k(x) =\int^x_0 f_{k-1}(t)\,dt$$</p> <p>By solving the recursive formula </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!} = \begin{cases} \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{x^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-(-1)^{\lceil k/2 \rceil}\sin(x) &amp; \text{If $k$ is odd} \\ \sum_{n=0}^{k/2-1}\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\cos(x) &amp; \text{If $k$ is even}\end{cases}$$</p> <p>Finally we have the closed form </p> <blockquote> <p>$$F(k) = \begin{cases}\left( \frac{2}{\pi}\right)^k \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{(\pi/2)^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^k(-1)^{\lceil k/2 \rceil} &amp; \text{If $k$ is odd} \\ \left( \frac{2}{\pi}\right)^k\sum_{n=0}^{k/2-1}\frac{(\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp; \text{If $k$ is even}\end{cases}$$</p> </blockquote> <p>To check the correctness of the formula </p> <p>For $k=5$ we have </p> <p>$$F(5)=\left(\frac{2}{\pi}\right)^5 \sum_{n=0}^{1}\frac{(\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^5(-1)^{3} = {1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$$</p> Answer 5: <p>Your sum is the special value at $x=(-(\pi/2)^2)$ of $$\sum_{n=0}^\infty \frac{x^n}{(2n + k)!}=\frac{\, _1F_2\left(1;\frac{k}{2}+\frac{1}{2},\frac{k}{2}+1;\frac{x}{4}\right)}{k!}$$</p> Answer 6: <p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br> In fact, premised that for a general function of a integer $f(k)$ we have $$ \begin{gathered} \frac{{\left( {i^{\,k} + \left( { - i} \right)^{\,k} } \right)}} {2}f(k) = \left( {\frac{{e^{\,i\,k\frac{\pi } {2}} + e^{\, - \,i\,k\frac{\pi } {2}} }} {2}} \right)f(k) = \cos \left( {k\frac{\pi } {2}} \right)f(k) = \hfill \\ = \left[ {k = 2j} \right]\left( { - 1} \right)^{\,j} f(2j)\quad \left| {\;k,j\; \in \;\;\mathbb{Z}\,} \right. 
\hfill \\ \end{gathered} $$ and that the <em>Lower Incomplete Gamma</em> function can be expressed as: $$ \begin{gathered} \gamma (s,z) = \int_{\,0\,}^{\,z\,} {t^{\,s - 1} \;e^{\, - \,t} dt} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \;\Gamma (s)\;\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\Gamma (s + k + 1)}}} = \frac{{z^{\,s} \;e^{\,\, - z} }} {s}\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\left( {s + 1} \right)^{\,\overline {\,k\,} } \,}}} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{s^{\,\overline {\,k + 1\,} } \,}}} = z^{\,s} \sum\limits_{0\, \leqslant \,j} {\frac{{\left( { - 1} \right)^{\,j} }} {{\,\left( {s + j} \right)}}\frac{{z^{\,j} }} {{j!}}} \hfill \\ \end{gathered} $$ we can then write $$ \begin{gathered} F(x,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{x^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \frac{1} {2}\left( {\sum\limits_{0\, \leqslant \,k} {\frac{{\left( {i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} + \sum\limits_{0\, \leqslant \,k} {\frac{{\left( { - \,i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} } \right) = \hfill \\ = \frac{1} {{2\,\Gamma (m)}}\left( {\frac{{e^{\,\,i\,x} \,\gamma (m,i\,x)}} {{\left( {i\,x} \right)^{\,m} }} + \frac{{e^{\, - \,i\,x} \,\gamma (m, - i\,x)}} {{\left( { - i\,x} \right)^{\,m} }}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\frac{{\gamma (m,i\,x)}} {{\Gamma (m)}}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\left( {1 - Q(m,i\,x)} \right)} \right) \hfill \\ \end{gathered} $$</p> <p>So, for $x=\pi /2$ we get $$ \begin{gathered} F(\pi /2,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\frac{{\gamma (m,i\,\pi /2)}} {{\Gamma (m)}}} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\left( {1 - Q(m,i\,\pi /2)} \right)} \right) \hfill \\ \end{gathered} $$ <strong>example with $m=5$</strong> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + 5} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \left( {e^{\,\, - \,i\,4\,\left( {\pi /2} \right)} \,\left( {1 - Q(5,i\,\pi /2)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \,\left( {1 - \frac{1} {{24}}\left( {12\,\pi - \frac{1} {2}\pi ^{\,3} + i\left( {3\pi ^{\,2} - \frac{1} {{16}}\pi ^{\,4} - 24} \right)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \left( {1 - \,\frac{\pi } {2} + \frac{1} {{48}}\pi ^{\,3} } \right) = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} \hfill \\ \end{gathered} $$ which matches with the value given by <em>Zaid Alyafeai</em>, as well as with the fomula indicated by <em>Igor Rivin</em> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} 
{{\left( {2k + 5} \right)!}}} = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} = \hfill \\ = 0.007860176 \cdots = \hfill \\ = \frac{1} {{5!}}{}_1F_2 \left( {1\;;\;\frac{5} {2} + \frac{1} {2},\;\frac{5} {2} + 1\;;\; - \frac{1} {4}\left( {\frac{\pi } {2}} \right)^{\,2} } \right) \hfill \\ \end{gathered} $$ and concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p> Answer 7: <p>Note that</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)! } = \cos(x)$$</p> <p>You can reach the result by integrating $k$-times. </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n+k)! } = \frac{1}{x^{k}}\int^{x}_0 \mathrm{d}t_{k-1}\int^{t_{k-1}}_0 \mathrm{d}t_{k-2} \cdots\int^{t_1}_0\mathrm{d}t_0\cos(t_0) $$</p> <p>For example when $k=1$</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+1)! } = \frac{2}{\pi}\int^{\pi/2}_0 \mathrm{d}t_0 \cos(t_0)\,= \frac{2}{\pi}$$</p> <p>For $k=2$</p> <p>\begin{align} \sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+2)! } &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\int^{t_1}_0\mathrm{d}t_0\cos(t_0) \\ &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\sin(t_1)\\ &amp; = \left( \frac{2}{\pi}\right)^2 \end{align}</p> <p>For </p> <p>$$f_k(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p> <p>We can define recursively </p> <p>$$f_0(x) = \cos(x)$$</p> <p>$$f_k(x) =\int^x_0 f_{k-1}(t)\,dt$$</p> <p>By solving the recursive formula </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!} = \begin{cases} \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{x^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-(-1)^{\lceil k/2 \rceil}\sin(x) &amp; \text{If $k$ is odd} \\ \sum_{n=0}^{k/2-1}\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\cos(x) &amp; \text{If $k$ is even}\end{cases}$$</p> <p>Finally we have the closed form </p> <blockquote> <p>$$F(k) = \begin{cases}\left( \frac{2}{\pi}\right)^k \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{(\pi/2)^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^k(-1)^{\lceil k/2 \rceil} &amp; \text{If $k$ is odd} \\ \left( \frac{2}{\pi}\right)^k\sum_{n=0}^{k/2-1}\frac{(\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp; \text{If $k$ is even}\end{cases}$$</p> </blockquote> <p>To check the correctness of the formula </p> <p>For $k=5$ we have </p> <p>$$F(5)=\left(\frac{2}{\pi}\right)^5 \sum_{n=0}^{1}\frac{(\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^5(-1)^{3} = {1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$$</p> Answer 8: <p>Your sum is the special value at $x=(-(\pi/2)^2)$ of $$\sum_{n=0}^\infty \frac{x^n}{(2n + k)!}=\frac{\, _1F_2\left(1;\frac{k}{2}+\frac{1}{2},\frac{k}{2}+1;\frac{x}{4}\right)}{k!}$$</p> Answer 9: <p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br> In fact, premised that for a general function of a integer $f(k)$ we have $$ \begin{gathered} \frac{{\left( {i^{\,k} + \left( { - i} \right)^{\,k} } \right)}} {2}f(k) = \left( {\frac{{e^{\,i\,k\frac{\pi } {2}} + e^{\, - \,i\,k\frac{\pi } {2}} }} {2}} \right)f(k) = \cos \left( {k\frac{\pi } {2}} \right)f(k) = \hfill \\ = \left[ {k = 2j} \right]\left( { - 1} \right)^{\,j} f(2j)\quad \left| {\;k,j\; \in \;\;\mathbb{Z}\,} \right. 
<p>By solving the recursive formula,</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!} = \begin{cases} \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{x^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-(-1)^{\lceil k/2 \rceil}\sin(x) &amp; \text{if $k$ is odd} \\ \sum_{n=0}^{k/2-1}\frac{x^{2n}(-1)^{n+k/2+1}}{(2n)!}+(-1)^{k/2}\cos(x) &amp; \text{if $k$ is even}\end{cases}$$</p> <p>Finally, evaluating at $x=\pi/2$ (where $\sin(\pi/2)=1$ and the cosine term vanishes), we have the closed form</p> <blockquote> <p>$$F(k) = \begin{cases}\left( \frac{2}{\pi}\right)^k \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{(\pi/2)^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^k(-1)^{\lceil k/2 \rceil} &amp; \text{if $k$ is odd} \\ \left( \frac{2}{\pi}\right)^k\sum_{n=0}^{k/2-1}\frac{(\pi/2)^{2n}(-1)^{n+k/2+1}}{(2n)!}&amp; \text{if $k$ is even}\end{cases}$$</p> </blockquote> <p>To check the correctness of the formula, take $k=5$:</p> <p>$$F(5)=\left(\frac{2}{\pi}\right)^5 \sum_{n=0}^{1}\frac{(\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^5(-1)^{3} = {1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$$</p>
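<p>A direct numerical comparison of this closed form with the defining series, as a sketch assuming <code>mpmath</code>:</p>

<pre><code># Sketch: compare the defining series F(k) with the piecewise closed form above.
from math import ceil
from mpmath import mp, factorial, pi

mp.dps = 25
N = 60  # truncation of the defining series; terms decay factorially

def F_series(k):
    return sum((-1)**n * (pi / 2)**(2*n) / factorial(2*n + k) for n in range(N))

def F_closed(k):
    if k % 2:  # odd k
        c = ceil(k / 2)
        s = sum((pi / 2)**(2*n + 1) * (-1)**(c + n) / factorial(2*n + 1)
                for n in range(c - 1))
        return (2 / pi)**k * s - (2 / pi)**k * (-1)**c
    # even k: the cosine term drops out at x = pi/2
    s = sum((pi / 2)**(2*n) * (-1)**(n + k//2 + 1) / factorial(2*n)
            for n in range(k // 2))
    return (2 / pi)**k * s

for k in range(1, 9):
    print(k, F_series(k), F_closed(k))  # the two columns agree
</code></pre>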
Answer 8: <p>Your sum is the special value at $x=-(\pi/2)^2$ of $$\sum_{n=0}^\infty \frac{x^n}{(2n + k)!}=\frac{\, _1F_2\left(1;\frac{k}{2}+\frac{1}{2},\frac{k}{2}+1;\frac{x}{4}\right)}{k!}$$</p>
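<p>This identity is easy to spot-check numerically; a minimal sketch, assuming <code>mpmath</code> (whose <code>hyp1f2</code> evaluates ${}_1F_2$), at $k=5$:</p>

<pre><code># Sketch: verify the 1F2 form at k = 5, x = -(pi/2)^2.
from mpmath import mp, hyp1f2, factorial, pi

mp.dps = 25
k = 5
x = -(pi / 2)**2  # the special value relevant to the question

lhs = sum(x**n / factorial(2*n + k) for n in range(60))
# note that k/2 + 1/2 = (k+1)/2 and k/2 + 1 = (k+2)/2
rhs = hyp1f2(1, (k + 1) / 2, (k + 2) / 2, x / 4) / factorial(k)
print(lhs, rhs)  # both approximately 0.00786017...
</code></pre>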
\hfill \\ \end{gathered} $$ and that the <em>Lower Incomplete Gamma</em> function can be expressed as: $$ \begin{gathered} \gamma (s,z) = \int_{\,0\,}^{\,z\,} {t^{\,s - 1} \;e^{\, - \,t} dt} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \;\Gamma (s)\;\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\Gamma (s + k + 1)}}} = \frac{{z^{\,s} \;e^{\,\, - z} }} {s}\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\left( {s + 1} \right)^{\,\overline {\,k\,} } \,}}} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{s^{\,\overline {\,k + 1\,} } \,}}} = z^{\,s} \sum\limits_{0\, \leqslant \,j} {\frac{{\left( { - 1} \right)^{\,j} }} {{\,\left( {s + j} \right)}}\frac{{z^{\,j} }} {{j!}}} \hfill \\ \end{gathered} $$ we can then write $$ \begin{gathered} F(x,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{x^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \frac{1} {2}\left( {\sum\limits_{0\, \leqslant \,k} {\frac{{\left( {i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} + \sum\limits_{0\, \leqslant \,k} {\frac{{\left( { - \,i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} } \right) = \hfill \\ = \frac{1} {{2\,\Gamma (m)}}\left( {\frac{{e^{\,\,i\,x} \,\gamma (m,i\,x)}} {{\left( {i\,x} \right)^{\,m} }} + \frac{{e^{\, - \,i\,x} \,\gamma (m, - i\,x)}} {{\left( { - i\,x} \right)^{\,m} }}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\frac{{\gamma (m,i\,x)}} {{\Gamma (m)}}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\left( {1 - Q(m,i\,x)} \right)} \right) \hfill \\ \end{gathered} $$</p> <p>So, for $x=\pi /2$ we get $$ \begin{gathered} F(\pi /2,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\frac{{\gamma (m,i\,\pi /2)}} {{\Gamma (m)}}} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\left( {1 - Q(m,i\,\pi /2)} \right)} \right) \hfill \\ \end{gathered} $$ <strong>example with $m=5$</strong> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + 5} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \left( {e^{\,\, - \,i\,4\,\left( {\pi /2} \right)} \,\left( {1 - Q(5,i\,\pi /2)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \,\left( {1 - \frac{1} {{24}}\left( {12\,\pi - \frac{1} {2}\pi ^{\,3} + i\left( {3\pi ^{\,2} - \frac{1} {{16}}\pi ^{\,4} - 24} \right)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \left( {1 - \,\frac{\pi } {2} + \frac{1} {{48}}\pi ^{\,3} } \right) = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} \hfill \\ \end{gathered} $$ which matches with the value given by <em>Zaid Alyafeai</em>, as well as with the fomula indicated by <em>Igor Rivin</em> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} 
{{\left( {2k + 5} \right)!}}} = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} = \hfill \\ = 0.007860176 \cdots = \hfill \\ = \frac{1} {{5!}}{}_1F_2 \left( {1\;;\;\frac{5} {2} + \frac{1} {2},\;\frac{5} {2} + 1\;;\; - \frac{1} {4}\left( {\frac{\pi } {2}} \right)^{\,2} } \right) \hfill \\ \end{gathered} $$ and concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p> Answer 13: <p>Note that</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)! } = \cos(x)$$</p> <p>You can reach the result by integrating $k$-times. </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n+k)! } = \frac{1}{x^{k}}\int^{x}_0 \mathrm{d}t_{k-1}\int^{t_{k-1}}_0 \mathrm{d}t_{k-2} \cdots\int^{t_1}_0\mathrm{d}t_0\cos(t_0) $$</p> <p>For example when $k=1$</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+1)! } = \frac{2}{\pi}\int^{\pi/2}_0 \mathrm{d}t_0 \cos(t_0)\,= \frac{2}{\pi}$$</p> <p>For $k=2$</p> <p>\begin{align} \sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+2)! } &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\int^{t_1}_0\mathrm{d}t_0\cos(t_0) \\ &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\sin(t_1)\\ &amp; = \left( \frac{2}{\pi}\right)^2 \end{align}</p> <p>For </p> <p>$$f_k(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p> <p>We can define recursively </p> <p>$$f_0(x) = \cos(x)$$</p> <p>$$f_k(x) =\int^x_0 f_{k-1}(t)\,dt$$</p> <p>By solving the recursive formula </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!} = \begin{cases} \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{x^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-(-1)^{\lceil k/2 \rceil}\sin(x) &amp; \text{If $k$ is odd} \\ \sum_{n=0}^{k/2-1}\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\cos(x) &amp; \text{If $k$ is even}\end{cases}$$</p> <p>Finally we have the closed form </p> <blockquote> <p>$$F(k) = \begin{cases}\left( \frac{2}{\pi}\right)^k \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{(\pi/2)^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^k(-1)^{\lceil k/2 \rceil} &amp; \text{If $k$ is odd} \\ \left( \frac{2}{\pi}\right)^k\sum_{n=0}^{k/2-1}\frac{(\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp; \text{If $k$ is even}\end{cases}$$</p> </blockquote> <p>To check the correctness of the formula </p> <p>For $k=5$ we have </p> <p>$$F(5)=\left(\frac{2}{\pi}\right)^5 \sum_{n=0}^{1}\frac{(\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^5(-1)^{3} = {1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$$</p> Answer 14: <p>Your sum is the special value at $x=(-(\pi/2)^2)$ of $$\sum_{n=0}^\infty \frac{x^n}{(2n + k)!}=\frac{\, _1F_2\left(1;\frac{k}{2}+\frac{1}{2},\frac{k}{2}+1;\frac{x}{4}\right)}{k!}$$</p> Answer 15: <p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br> In fact, premised that for a general function of a integer $f(k)$ we have $$ \begin{gathered} \frac{{\left( {i^{\,k} + \left( { - i} \right)^{\,k} } \right)}} {2}f(k) = \left( {\frac{{e^{\,i\,k\frac{\pi } {2}} + e^{\, - \,i\,k\frac{\pi } {2}} }} {2}} \right)f(k) = \cos \left( {k\frac{\pi } {2}} \right)f(k) = \hfill \\ = \left[ {k = 2j} \right]\left( { - 1} \right)^{\,j} f(2j)\quad \left| {\;k,j\; \in \;\;\mathbb{Z}\,} \right. 
\hfill \\ \end{gathered} $$ and that the <em>Lower Incomplete Gamma</em> function can be expressed as: $$ \begin{gathered} \gamma (s,z) = \int_{\,0\,}^{\,z\,} {t^{\,s - 1} \;e^{\, - \,t} dt} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \;\Gamma (s)\;\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\Gamma (s + k + 1)}}} = \frac{{z^{\,s} \;e^{\,\, - z} }} {s}\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\left( {s + 1} \right)^{\,\overline {\,k\,} } \,}}} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{s^{\,\overline {\,k + 1\,} } \,}}} = z^{\,s} \sum\limits_{0\, \leqslant \,j} {\frac{{\left( { - 1} \right)^{\,j} }} {{\,\left( {s + j} \right)}}\frac{{z^{\,j} }} {{j!}}} \hfill \\ \end{gathered} $$ we can then write $$ \begin{gathered} F(x,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{x^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \frac{1} {2}\left( {\sum\limits_{0\, \leqslant \,k} {\frac{{\left( {i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} + \sum\limits_{0\, \leqslant \,k} {\frac{{\left( { - \,i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} } \right) = \hfill \\ = \frac{1} {{2\,\Gamma (m)}}\left( {\frac{{e^{\,\,i\,x} \,\gamma (m,i\,x)}} {{\left( {i\,x} \right)^{\,m} }} + \frac{{e^{\, - \,i\,x} \,\gamma (m, - i\,x)}} {{\left( { - i\,x} \right)^{\,m} }}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\frac{{\gamma (m,i\,x)}} {{\Gamma (m)}}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\left( {1 - Q(m,i\,x)} \right)} \right) \hfill \\ \end{gathered} $$</p> <p>So, for $x=\pi /2$ we get $$ \begin{gathered} F(\pi /2,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\frac{{\gamma (m,i\,\pi /2)}} {{\Gamma (m)}}} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\left( {1 - Q(m,i\,\pi /2)} \right)} \right) \hfill \\ \end{gathered} $$ <strong>example with $m=5$</strong> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + 5} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \left( {e^{\,\, - \,i\,4\,\left( {\pi /2} \right)} \,\left( {1 - Q(5,i\,\pi /2)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \,\left( {1 - \frac{1} {{24}}\left( {12\,\pi - \frac{1} {2}\pi ^{\,3} + i\left( {3\pi ^{\,2} - \frac{1} {{16}}\pi ^{\,4} - 24} \right)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \left( {1 - \,\frac{\pi } {2} + \frac{1} {{48}}\pi ^{\,3} } \right) = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} \hfill \\ \end{gathered} $$ which matches with the value given by <em>Zaid Alyafeai</em>, as well as with the fomula indicated by <em>Igor Rivin</em> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} 
{{\left( {2k + 5} \right)!}}} = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} = \hfill \\ = 0.007860176 \cdots = \hfill \\ = \frac{1} {{5!}}{}_1F_2 \left( {1\;;\;\frac{5} {2} + \frac{1} {2},\;\frac{5} {2} + 1\;;\; - \frac{1} {4}\left( {\frac{\pi } {2}} \right)^{\,2} } \right) \hfill \\ \end{gathered} $$ and concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p> Answer 16: <p>Note that</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)! } = \cos(x)$$</p> <p>You can reach the result by integrating $k$-times. </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n+k)! } = \frac{1}{x^{k}}\int^{x}_0 \mathrm{d}t_{k-1}\int^{t_{k-1}}_0 \mathrm{d}t_{k-2} \cdots\int^{t_1}_0\mathrm{d}t_0\cos(t_0) $$</p> <p>For example when $k=1$</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+1)! } = \frac{2}{\pi}\int^{\pi/2}_0 \mathrm{d}t_0 \cos(t_0)\,= \frac{2}{\pi}$$</p> <p>For $k=2$</p> <p>\begin{align} \sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+2)! } &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\int^{t_1}_0\mathrm{d}t_0\cos(t_0) \\ &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\sin(t_1)\\ &amp; = \left( \frac{2}{\pi}\right)^2 \end{align}</p> <p>For </p> <p>$$f_k(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p> <p>We can define recursively </p> <p>$$f_0(x) = \cos(x)$$</p> <p>$$f_k(x) =\int^x_0 f_{k-1}(t)\,dt$$</p> <p>By solving the recursive formula </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!} = \begin{cases} \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{x^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-(-1)^{\lceil k/2 \rceil}\sin(x) &amp; \text{If $k$ is odd} \\ \sum_{n=0}^{k/2-1}\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\cos(x) &amp; \text{If $k$ is even}\end{cases}$$</p> <p>Finally we have the closed form </p> <blockquote> <p>$$F(k) = \begin{cases}\left( \frac{2}{\pi}\right)^k \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{(\pi/2)^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^k(-1)^{\lceil k/2 \rceil} &amp; \text{If $k$ is odd} \\ \left( \frac{2}{\pi}\right)^k\sum_{n=0}^{k/2-1}\frac{(\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp; \text{If $k$ is even}\end{cases}$$</p> </blockquote> <p>To check the correctness of the formula </p> <p>For $k=5$ we have </p> <p>$$F(5)=\left(\frac{2}{\pi}\right)^5 \sum_{n=0}^{1}\frac{(\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^5(-1)^{3} = {1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$$</p> Answer 17: <p>Your sum is the special value at $x=(-(\pi/2)^2)$ of $$\sum_{n=0}^\infty \frac{x^n}{(2n + k)!}=\frac{\, _1F_2\left(1;\frac{k}{2}+\frac{1}{2},\frac{k}{2}+1;\frac{x}{4}\right)}{k!}$$</p> Answer 18: <p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br> In fact, premised that for a general function of a integer $f(k)$ we have $$ \begin{gathered} \frac{{\left( {i^{\,k} + \left( { - i} \right)^{\,k} } \right)}} {2}f(k) = \left( {\frac{{e^{\,i\,k\frac{\pi } {2}} + e^{\, - \,i\,k\frac{\pi } {2}} }} {2}} \right)f(k) = \cos \left( {k\frac{\pi } {2}} \right)f(k) = \hfill \\ = \left[ {k = 2j} \right]\left( { - 1} \right)^{\,j} f(2j)\quad \left| {\;k,j\; \in \;\;\mathbb{Z}\,} \right. 
\hfill \\ \end{gathered} $$ and that the <em>Lower Incomplete Gamma</em> function can be expressed as: $$ \begin{gathered} \gamma (s,z) = \int_{\,0\,}^{\,z\,} {t^{\,s - 1} \;e^{\, - \,t} dt} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \;\Gamma (s)\;\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\Gamma (s + k + 1)}}} = \frac{{z^{\,s} \;e^{\,\, - z} }} {s}\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\left( {s + 1} \right)^{\,\overline {\,k\,} } \,}}} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{s^{\,\overline {\,k + 1\,} } \,}}} = z^{\,s} \sum\limits_{0\, \leqslant \,j} {\frac{{\left( { - 1} \right)^{\,j} }} {{\,\left( {s + j} \right)}}\frac{{z^{\,j} }} {{j!}}} \hfill \\ \end{gathered} $$ we can then write $$ \begin{gathered} F(x,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{x^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \frac{1} {2}\left( {\sum\limits_{0\, \leqslant \,k} {\frac{{\left( {i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} + \sum\limits_{0\, \leqslant \,k} {\frac{{\left( { - \,i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} } \right) = \hfill \\ = \frac{1} {{2\,\Gamma (m)}}\left( {\frac{{e^{\,\,i\,x} \,\gamma (m,i\,x)}} {{\left( {i\,x} \right)^{\,m} }} + \frac{{e^{\, - \,i\,x} \,\gamma (m, - i\,x)}} {{\left( { - i\,x} \right)^{\,m} }}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\frac{{\gamma (m,i\,x)}} {{\Gamma (m)}}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\left( {1 - Q(m,i\,x)} \right)} \right) \hfill \\ \end{gathered} $$</p> <p>So, for $x=\pi /2$ we get $$ \begin{gathered} F(\pi /2,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\frac{{\gamma (m,i\,\pi /2)}} {{\Gamma (m)}}} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\left( {1 - Q(m,i\,\pi /2)} \right)} \right) \hfill \\ \end{gathered} $$ <strong>example with $m=5$</strong> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + 5} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \left( {e^{\,\, - \,i\,4\,\left( {\pi /2} \right)} \,\left( {1 - Q(5,i\,\pi /2)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \,\left( {1 - \frac{1} {{24}}\left( {12\,\pi - \frac{1} {2}\pi ^{\,3} + i\left( {3\pi ^{\,2} - \frac{1} {{16}}\pi ^{\,4} - 24} \right)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \left( {1 - \,\frac{\pi } {2} + \frac{1} {{48}}\pi ^{\,3} } \right) = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} \hfill \\ \end{gathered} $$ which matches with the value given by <em>Zaid Alyafeai</em>, as well as with the fomula indicated by <em>Igor Rivin</em> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} 
{{\left( {2k + 5} \right)!}}} = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} = \hfill \\ = 0.007860176 \cdots = \hfill \\ = \frac{1} {{5!}}{}_1F_2 \left( {1\;;\;\frac{5} {2} + \frac{1} {2},\;\frac{5} {2} + 1\;;\; - \frac{1} {4}\left( {\frac{\pi } {2}} \right)^{\,2} } \right) \hfill \\ \end{gathered} $$ and concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p> Answer 19: <p>Note that</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)! } = \cos(x)$$</p> <p>You can reach the result by integrating $k$-times. </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n+k)! } = \frac{1}{x^{k}}\int^{x}_0 \mathrm{d}t_{k-1}\int^{t_{k-1}}_0 \mathrm{d}t_{k-2} \cdots\int^{t_1}_0\mathrm{d}t_0\cos(t_0) $$</p> <p>For example when $k=1$</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+1)! } = \frac{2}{\pi}\int^{\pi/2}_0 \mathrm{d}t_0 \cos(t_0)\,= \frac{2}{\pi}$$</p> <p>For $k=2$</p> <p>\begin{align} \sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+2)! } &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\int^{t_1}_0\mathrm{d}t_0\cos(t_0) \\ &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\sin(t_1)\\ &amp; = \left( \frac{2}{\pi}\right)^2 \end{align}</p> <p>For </p> <p>$$f_k(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p> <p>We can define recursively </p> <p>$$f_0(x) = \cos(x)$$</p> <p>$$f_k(x) =\int^x_0 f_{k-1}(t)\,dt$$</p> <p>By solving the recursive formula </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!} = \begin{cases} \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{x^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-(-1)^{\lceil k/2 \rceil}\sin(x) &amp; \text{If $k$ is odd} \\ \sum_{n=0}^{k/2-1}\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\cos(x) &amp; \text{If $k$ is even}\end{cases}$$</p> <p>Finally we have the closed form </p> <blockquote> <p>$$F(k) = \begin{cases}\left( \frac{2}{\pi}\right)^k \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{(\pi/2)^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^k(-1)^{\lceil k/2 \rceil} &amp; \text{If $k$ is odd} \\ \left( \frac{2}{\pi}\right)^k\sum_{n=0}^{k/2-1}\frac{(\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp; \text{If $k$ is even}\end{cases}$$</p> </blockquote> <p>To check the correctness of the formula </p> <p>For $k=5$ we have </p> <p>$$F(5)=\left(\frac{2}{\pi}\right)^5 \sum_{n=0}^{1}\frac{(\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^5(-1)^{3} = {1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$$</p> Answer 20: <p>Your sum is the special value at $x=(-(\pi/2)^2)$ of $$\sum_{n=0}^\infty \frac{x^n}{(2n + k)!}=\frac{\, _1F_2\left(1;\frac{k}{2}+\frac{1}{2},\frac{k}{2}+1;\frac{x}{4}\right)}{k!}$$</p> Answer 21: <p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br> In fact, premised that for a general function of a integer $f(k)$ we have $$ \begin{gathered} \frac{{\left( {i^{\,k} + \left( { - i} \right)^{\,k} } \right)}} {2}f(k) = \left( {\frac{{e^{\,i\,k\frac{\pi } {2}} + e^{\, - \,i\,k\frac{\pi } {2}} }} {2}} \right)f(k) = \cos \left( {k\frac{\pi } {2}} \right)f(k) = \hfill \\ = \left[ {k = 2j} \right]\left( { - 1} \right)^{\,j} f(2j)\quad \left| {\;k,j\; \in \;\;\mathbb{Z}\,} \right. 
\hfill \\ \end{gathered} $$ and that the <em>Lower Incomplete Gamma</em> function can be expressed as: $$ \begin{gathered} \gamma (s,z) = \int_{\,0\,}^{\,z\,} {t^{\,s - 1} \;e^{\, - \,t} dt} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \;\Gamma (s)\;\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\Gamma (s + k + 1)}}} = \frac{{z^{\,s} \;e^{\,\, - z} }} {s}\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\left( {s + 1} \right)^{\,\overline {\,k\,} } \,}}} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{s^{\,\overline {\,k + 1\,} } \,}}} = z^{\,s} \sum\limits_{0\, \leqslant \,j} {\frac{{\left( { - 1} \right)^{\,j} }} {{\,\left( {s + j} \right)}}\frac{{z^{\,j} }} {{j!}}} \hfill \\ \end{gathered} $$ we can then write $$ \begin{gathered} F(x,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{x^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \frac{1} {2}\left( {\sum\limits_{0\, \leqslant \,k} {\frac{{\left( {i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} + \sum\limits_{0\, \leqslant \,k} {\frac{{\left( { - \,i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} } \right) = \hfill \\ = \frac{1} {{2\,\Gamma (m)}}\left( {\frac{{e^{\,\,i\,x} \,\gamma (m,i\,x)}} {{\left( {i\,x} \right)^{\,m} }} + \frac{{e^{\, - \,i\,x} \,\gamma (m, - i\,x)}} {{\left( { - i\,x} \right)^{\,m} }}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\frac{{\gamma (m,i\,x)}} {{\Gamma (m)}}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\left( {1 - Q(m,i\,x)} \right)} \right) \hfill \\ \end{gathered} $$</p> <p>So, for $x=\pi /2$ we get $$ \begin{gathered} F(\pi /2,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\frac{{\gamma (m,i\,\pi /2)}} {{\Gamma (m)}}} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\left( {1 - Q(m,i\,\pi /2)} \right)} \right) \hfill \\ \end{gathered} $$ <strong>example with $m=5$</strong> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + 5} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \left( {e^{\,\, - \,i\,4\,\left( {\pi /2} \right)} \,\left( {1 - Q(5,i\,\pi /2)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \,\left( {1 - \frac{1} {{24}}\left( {12\,\pi - \frac{1} {2}\pi ^{\,3} + i\left( {3\pi ^{\,2} - \frac{1} {{16}}\pi ^{\,4} - 24} \right)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \left( {1 - \,\frac{\pi } {2} + \frac{1} {{48}}\pi ^{\,3} } \right) = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} \hfill \\ \end{gathered} $$ which matches with the value given by <em>Zaid Alyafeai</em>, as well as with the fomula indicated by <em>Igor Rivin</em> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} 
{{\left( {2k + 5} \right)!}}} = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} = \hfill \\ = 0.007860176 \cdots = \hfill \\ = \frac{1} {{5!}}{}_1F_2 \left( {1\;;\;\frac{5} {2} + \frac{1} {2},\;\frac{5} {2} + 1\;;\; - \frac{1} {4}\left( {\frac{\pi } {2}} \right)^{\,2} } \right) \hfill \\ \end{gathered} $$ and concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p> Answer 22: <p>Note that</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)! } = \cos(x)$$</p> <p>You can reach the result by integrating $k$-times. </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n+k)! } = \frac{1}{x^{k}}\int^{x}_0 \mathrm{d}t_{k-1}\int^{t_{k-1}}_0 \mathrm{d}t_{k-2} \cdots\int^{t_1}_0\mathrm{d}t_0\cos(t_0) $$</p> <p>For example when $k=1$</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+1)! } = \frac{2}{\pi}\int^{\pi/2}_0 \mathrm{d}t_0 \cos(t_0)\,= \frac{2}{\pi}$$</p> <p>For $k=2$</p> <p>\begin{align} \sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+2)! } &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\int^{t_1}_0\mathrm{d}t_0\cos(t_0) \\ &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\sin(t_1)\\ &amp; = \left( \frac{2}{\pi}\right)^2 \end{align}</p> <p>For </p> <p>$$f_k(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p> <p>We can define recursively </p> <p>$$f_0(x) = \cos(x)$$</p> <p>$$f_k(x) =\int^x_0 f_{k-1}(t)\,dt$$</p> <p>By solving the recursive formula </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!} = \begin{cases} \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{x^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-(-1)^{\lceil k/2 \rceil}\sin(x) &amp; \text{If $k$ is odd} \\ \sum_{n=0}^{k/2-1}\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\cos(x) &amp; \text{If $k$ is even}\end{cases}$$</p> <p>Finally we have the closed form </p> <blockquote> <p>$$F(k) = \begin{cases}\left( \frac{2}{\pi}\right)^k \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{(\pi/2)^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^k(-1)^{\lceil k/2 \rceil} &amp; \text{If $k$ is odd} \\ \left( \frac{2}{\pi}\right)^k\sum_{n=0}^{k/2-1}\frac{(\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp; \text{If $k$ is even}\end{cases}$$</p> </blockquote> <p>To check the correctness of the formula </p> <p>For $k=5$ we have </p> <p>$$F(5)=\left(\frac{2}{\pi}\right)^5 \sum_{n=0}^{1}\frac{(\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^5(-1)^{3} = {1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$$</p> Answer 23: <p>Your sum is the special value at $x=(-(\pi/2)^2)$ of $$\sum_{n=0}^\infty \frac{x^n}{(2n + k)!}=\frac{\, _1F_2\left(1;\frac{k}{2}+\frac{1}{2},\frac{k}{2}+1;\frac{x}{4}\right)}{k!}$$</p> Answer 24: <p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br> In fact, premised that for a general function of a integer $f(k)$ we have $$ \begin{gathered} \frac{{\left( {i^{\,k} + \left( { - i} \right)^{\,k} } \right)}} {2}f(k) = \left( {\frac{{e^{\,i\,k\frac{\pi } {2}} + e^{\, - \,i\,k\frac{\pi } {2}} }} {2}} \right)f(k) = \cos \left( {k\frac{\pi } {2}} \right)f(k) = \hfill \\ = \left[ {k = 2j} \right]\left( { - 1} \right)^{\,j} f(2j)\quad \left| {\;k,j\; \in \;\;\mathbb{Z}\,} \right. 
\hfill \\ \end{gathered} $$ and that the <em>Lower Incomplete Gamma</em> function can be expressed as: $$ \begin{gathered} \gamma (s,z) = \int_{\,0\,}^{\,z\,} {t^{\,s - 1} \;e^{\, - \,t} dt} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \;\Gamma (s)\;\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\Gamma (s + k + 1)}}} = \frac{{z^{\,s} \;e^{\,\, - z} }} {s}\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\left( {s + 1} \right)^{\,\overline {\,k\,} } \,}}} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{s^{\,\overline {\,k + 1\,} } \,}}} = z^{\,s} \sum\limits_{0\, \leqslant \,j} {\frac{{\left( { - 1} \right)^{\,j} }} {{\,\left( {s + j} \right)}}\frac{{z^{\,j} }} {{j!}}} \hfill \\ \end{gathered} $$ we can then write $$ \begin{gathered} F(x,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{x^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \frac{1} {2}\left( {\sum\limits_{0\, \leqslant \,k} {\frac{{\left( {i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} + \sum\limits_{0\, \leqslant \,k} {\frac{{\left( { - \,i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} } \right) = \hfill \\ = \frac{1} {{2\,\Gamma (m)}}\left( {\frac{{e^{\,\,i\,x} \,\gamma (m,i\,x)}} {{\left( {i\,x} \right)^{\,m} }} + \frac{{e^{\, - \,i\,x} \,\gamma (m, - i\,x)}} {{\left( { - i\,x} \right)^{\,m} }}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\frac{{\gamma (m,i\,x)}} {{\Gamma (m)}}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\left( {1 - Q(m,i\,x)} \right)} \right) \hfill \\ \end{gathered} $$</p> <p>So, for $x=\pi /2$ we get $$ \begin{gathered} F(\pi /2,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\frac{{\gamma (m,i\,\pi /2)}} {{\Gamma (m)}}} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\left( {1 - Q(m,i\,\pi /2)} \right)} \right) \hfill \\ \end{gathered} $$ <strong>example with $m=5$</strong> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + 5} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \left( {e^{\,\, - \,i\,4\,\left( {\pi /2} \right)} \,\left( {1 - Q(5,i\,\pi /2)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \,\left( {1 - \frac{1} {{24}}\left( {12\,\pi - \frac{1} {2}\pi ^{\,3} + i\left( {3\pi ^{\,2} - \frac{1} {{16}}\pi ^{\,4} - 24} \right)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \left( {1 - \,\frac{\pi } {2} + \frac{1} {{48}}\pi ^{\,3} } \right) = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} \hfill \\ \end{gathered} $$ which matches with the value given by <em>Zaid Alyafeai</em>, as well as with the fomula indicated by <em>Igor Rivin</em> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} 
{{\left( {2k + 5} \right)!}}} = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} = \hfill \\ = 0.007860176 \cdots = \hfill \\ = \frac{1} {{5!}}{}_1F_2 \left( {1\;;\;\frac{5} {2} + \frac{1} {2},\;\frac{5} {2} + 1\;;\; - \frac{1} {4}\left( {\frac{\pi } {2}} \right)^{\,2} } \right) \hfill \\ \end{gathered} $$ and concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p> Answer 25: <p>Note that</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)! } = \cos(x)$$</p> <p>You can reach the result by integrating $k$-times. </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n+k)! } = \frac{1}{x^{k}}\int^{x}_0 \mathrm{d}t_{k-1}\int^{t_{k-1}}_0 \mathrm{d}t_{k-2} \cdots\int^{t_1}_0\mathrm{d}t_0\cos(t_0) $$</p> <p>For example when $k=1$</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+1)! } = \frac{2}{\pi}\int^{\pi/2}_0 \mathrm{d}t_0 \cos(t_0)\,= \frac{2}{\pi}$$</p> <p>For $k=2$</p> <p>\begin{align} \sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+2)! } &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\int^{t_1}_0\mathrm{d}t_0\cos(t_0) \\ &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\sin(t_1)\\ &amp; = \left( \frac{2}{\pi}\right)^2 \end{align}</p> <p>For </p> <p>$$f_k(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p> <p>We can define recursively </p> <p>$$f_0(x) = \cos(x)$$</p> <p>$$f_k(x) =\int^x_0 f_{k-1}(t)\,dt$$</p> <p>By solving the recursive formula </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!} = \begin{cases} \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{x^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-(-1)^{\lceil k/2 \rceil}\sin(x) &amp; \text{If $k$ is odd} \\ \sum_{n=0}^{k/2-1}\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\cos(x) &amp; \text{If $k$ is even}\end{cases}$$</p> <p>Finally we have the closed form </p> <blockquote> <p>$$F(k) = \begin{cases}\left( \frac{2}{\pi}\right)^k \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{(\pi/2)^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^k(-1)^{\lceil k/2 \rceil} &amp; \text{If $k$ is odd} \\ \left( \frac{2}{\pi}\right)^k\sum_{n=0}^{k/2-1}\frac{(\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp; \text{If $k$ is even}\end{cases}$$</p> </blockquote> <p>To check the correctness of the formula </p> <p>For $k=5$ we have </p> <p>$$F(5)=\left(\frac{2}{\pi}\right)^5 \sum_{n=0}^{1}\frac{(\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^5(-1)^{3} = {1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$$</p> Answer 26: <p>Your sum is the special value at $x=(-(\pi/2)^2)$ of $$\sum_{n=0}^\infty \frac{x^n}{(2n + k)!}=\frac{\, _1F_2\left(1;\frac{k}{2}+\frac{1}{2},\frac{k}{2}+1;\frac{x}{4}\right)}{k!}$$</p> Answer 27: <p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br> In fact, premised that for a general function of a integer $f(k)$ we have $$ \begin{gathered} \frac{{\left( {i^{\,k} + \left( { - i} \right)^{\,k} } \right)}} {2}f(k) = \left( {\frac{{e^{\,i\,k\frac{\pi } {2}} + e^{\, - \,i\,k\frac{\pi } {2}} }} {2}} \right)f(k) = \cos \left( {k\frac{\pi } {2}} \right)f(k) = \hfill \\ = \left[ {k = 2j} \right]\left( { - 1} \right)^{\,j} f(2j)\quad \left| {\;k,j\; \in \;\;\mathbb{Z}\,} \right. 
\hfill \\ \end{gathered} $$ and that the <em>Lower Incomplete Gamma</em> function can be expressed as: $$ \begin{gathered} \gamma (s,z) = \int_{\,0\,}^{\,z\,} {t^{\,s - 1} \;e^{\, - \,t} dt} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \;\Gamma (s)\;\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\Gamma (s + k + 1)}}} = \frac{{z^{\,s} \;e^{\,\, - z} }} {s}\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\left( {s + 1} \right)^{\,\overline {\,k\,} } \,}}} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{s^{\,\overline {\,k + 1\,} } \,}}} = z^{\,s} \sum\limits_{0\, \leqslant \,j} {\frac{{\left( { - 1} \right)^{\,j} }} {{\,\left( {s + j} \right)}}\frac{{z^{\,j} }} {{j!}}} \hfill \\ \end{gathered} $$ we can then write $$ \begin{gathered} F(x,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{x^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \frac{1} {2}\left( {\sum\limits_{0\, \leqslant \,k} {\frac{{\left( {i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} + \sum\limits_{0\, \leqslant \,k} {\frac{{\left( { - \,i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} } \right) = \hfill \\ = \frac{1} {{2\,\Gamma (m)}}\left( {\frac{{e^{\,\,i\,x} \,\gamma (m,i\,x)}} {{\left( {i\,x} \right)^{\,m} }} + \frac{{e^{\, - \,i\,x} \,\gamma (m, - i\,x)}} {{\left( { - i\,x} \right)^{\,m} }}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\frac{{\gamma (m,i\,x)}} {{\Gamma (m)}}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\left( {1 - Q(m,i\,x)} \right)} \right) \hfill \\ \end{gathered} $$</p> <p>So, for $x=\pi /2$ we get $$ \begin{gathered} F(\pi /2,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\frac{{\gamma (m,i\,\pi /2)}} {{\Gamma (m)}}} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\left( {1 - Q(m,i\,\pi /2)} \right)} \right) \hfill \\ \end{gathered} $$ <strong>example with $m=5$</strong> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + 5} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \left( {e^{\,\, - \,i\,4\,\left( {\pi /2} \right)} \,\left( {1 - Q(5,i\,\pi /2)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \,\left( {1 - \frac{1} {{24}}\left( {12\,\pi - \frac{1} {2}\pi ^{\,3} + i\left( {3\pi ^{\,2} - \frac{1} {{16}}\pi ^{\,4} - 24} \right)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \left( {1 - \,\frac{\pi } {2} + \frac{1} {{48}}\pi ^{\,3} } \right) = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} \hfill \\ \end{gathered} $$ which matches with the value given by <em>Zaid Alyafeai</em>, as well as with the fomula indicated by <em>Igor Rivin</em> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} 
{{\left( {2k + 5} \right)!}}} = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} = \hfill \\ = 0.007860176 \cdots = \hfill \\ = \frac{1} {{5!}}{}_1F_2 \left( {1\;;\;\frac{5} {2} + \frac{1} {2},\;\frac{5} {2} + 1\;;\; - \frac{1} {4}\left( {\frac{\pi } {2}} \right)^{\,2} } \right) \hfill \\ \end{gathered} $$ and concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p> Answer 28: <p>Note that</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)! } = \cos(x)$$</p> <p>You can reach the result by integrating $k$-times. </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n+k)! } = \frac{1}{x^{k}}\int^{x}_0 \mathrm{d}t_{k-1}\int^{t_{k-1}}_0 \mathrm{d}t_{k-2} \cdots\int^{t_1}_0\mathrm{d}t_0\cos(t_0) $$</p> <p>For example when $k=1$</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+1)! } = \frac{2}{\pi}\int^{\pi/2}_0 \mathrm{d}t_0 \cos(t_0)\,= \frac{2}{\pi}$$</p> <p>For $k=2$</p> <p>\begin{align} \sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+2)! } &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\int^{t_1}_0\mathrm{d}t_0\cos(t_0) \\ &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\sin(t_1)\\ &amp; = \left( \frac{2}{\pi}\right)^2 \end{align}</p> <p>For </p> <p>$$f_k(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p> <p>We can define recursively </p> <p>$$f_0(x) = \cos(x)$$</p> <p>$$f_k(x) =\int^x_0 f_{k-1}(t)\,dt$$</p> <p>By solving the recursive formula </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!} = \begin{cases} \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{x^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-(-1)^{\lceil k/2 \rceil}\sin(x) &amp; \text{If $k$ is odd} \\ \sum_{n=0}^{k/2-1}\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\cos(x) &amp; \text{If $k$ is even}\end{cases}$$</p> <p>Finally we have the closed form </p> <blockquote> <p>$$F(k) = \begin{cases}\left( \frac{2}{\pi}\right)^k \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{(\pi/2)^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^k(-1)^{\lceil k/2 \rceil} &amp; \text{If $k$ is odd} \\ \left( \frac{2}{\pi}\right)^k\sum_{n=0}^{k/2-1}\frac{(\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp; \text{If $k$ is even}\end{cases}$$</p> </blockquote> <p>To check the correctness of the formula </p> <p>For $k=5$ we have </p> <p>$$F(5)=\left(\frac{2}{\pi}\right)^5 \sum_{n=0}^{1}\frac{(\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^5(-1)^{3} = {1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$$</p> Answer 29: <p>Your sum is the special value at $x=(-(\pi/2)^2)$ of $$\sum_{n=0}^\infty \frac{x^n}{(2n + k)!}=\frac{\, _1F_2\left(1;\frac{k}{2}+\frac{1}{2},\frac{k}{2}+1;\frac{x}{4}\right)}{k!}$$</p> Answer 30: <p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br> In fact, premised that for a general function of a integer $f(k)$ we have $$ \begin{gathered} \frac{{\left( {i^{\,k} + \left( { - i} \right)^{\,k} } \right)}} {2}f(k) = \left( {\frac{{e^{\,i\,k\frac{\pi } {2}} + e^{\, - \,i\,k\frac{\pi } {2}} }} {2}} \right)f(k) = \cos \left( {k\frac{\pi } {2}} \right)f(k) = \hfill \\ = \left[ {k = 2j} \right]\left( { - 1} \right)^{\,j} f(2j)\quad \left| {\;k,j\; \in \;\;\mathbb{Z}\,} \right. 
\hfill \\ \end{gathered} $$ and that the <em>Lower Incomplete Gamma</em> function can be expressed as: $$ \begin{gathered} \gamma (s,z) = \int_{\,0\,}^{\,z\,} {t^{\,s - 1} \;e^{\, - \,t} dt} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \;\Gamma (s)\;\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\Gamma (s + k + 1)}}} = \frac{{z^{\,s} \;e^{\,\, - z} }} {s}\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\left( {s + 1} \right)^{\,\overline {\,k\,} } \,}}} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{s^{\,\overline {\,k + 1\,} } \,}}} = z^{\,s} \sum\limits_{0\, \leqslant \,j} {\frac{{\left( { - 1} \right)^{\,j} }} {{\,\left( {s + j} \right)}}\frac{{z^{\,j} }} {{j!}}} \hfill \\ \end{gathered} $$ we can then write $$ \begin{gathered} F(x,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{x^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \frac{1} {2}\left( {\sum\limits_{0\, \leqslant \,k} {\frac{{\left( {i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} + \sum\limits_{0\, \leqslant \,k} {\frac{{\left( { - \,i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} } \right) = \hfill \\ = \frac{1} {{2\,\Gamma (m)}}\left( {\frac{{e^{\,\,i\,x} \,\gamma (m,i\,x)}} {{\left( {i\,x} \right)^{\,m} }} + \frac{{e^{\, - \,i\,x} \,\gamma (m, - i\,x)}} {{\left( { - i\,x} \right)^{\,m} }}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\frac{{\gamma (m,i\,x)}} {{\Gamma (m)}}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\left( {1 - Q(m,i\,x)} \right)} \right) \hfill \\ \end{gathered} $$</p> <p>So, for $x=\pi /2$ we get $$ \begin{gathered} F(\pi /2,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\frac{{\gamma (m,i\,\pi /2)}} {{\Gamma (m)}}} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\left( {1 - Q(m,i\,\pi /2)} \right)} \right) \hfill \\ \end{gathered} $$ <strong>example with $m=5$</strong> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + 5} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \left( {e^{\,\, - \,i\,4\,\left( {\pi /2} \right)} \,\left( {1 - Q(5,i\,\pi /2)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \,\left( {1 - \frac{1} {{24}}\left( {12\,\pi - \frac{1} {2}\pi ^{\,3} + i\left( {3\pi ^{\,2} - \frac{1} {{16}}\pi ^{\,4} - 24} \right)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \left( {1 - \,\frac{\pi } {2} + \frac{1} {{48}}\pi ^{\,3} } \right) = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} \hfill \\ \end{gathered} $$ which matches with the value given by <em>Zaid Alyafeai</em>, as well as with the fomula indicated by <em>Igor Rivin</em> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} 
{{\left( {2k + 5} \right)!}}} = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} = \hfill \\ = 0.007860176 \cdots = \hfill \\ = \frac{1} {{5!}}{}_1F_2 \left( {1\;;\;\frac{5} {2} + \frac{1} {2},\;\frac{5} {2} + 1\;;\; - \frac{1} {4}\left( {\frac{\pi } {2}} \right)^{\,2} } \right) \hfill \\ \end{gathered} $$ and concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p> Answer 31: <p>Note that</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)! } = \cos(x)$$</p> <p>You can reach the result by integrating $k$-times. </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n+k)! } = \frac{1}{x^{k}}\int^{x}_0 \mathrm{d}t_{k-1}\int^{t_{k-1}}_0 \mathrm{d}t_{k-2} \cdots\int^{t_1}_0\mathrm{d}t_0\cos(t_0) $$</p> <p>For example when $k=1$</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+1)! } = \frac{2}{\pi}\int^{\pi/2}_0 \mathrm{d}t_0 \cos(t_0)\,= \frac{2}{\pi}$$</p> <p>For $k=2$</p> <p>\begin{align} \sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+2)! } &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\int^{t_1}_0\mathrm{d}t_0\cos(t_0) \\ &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\sin(t_1)\\ &amp; = \left( \frac{2}{\pi}\right)^2 \end{align}</p> <p>For </p> <p>$$f_k(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p> <p>We can define recursively </p> <p>$$f_0(x) = \cos(x)$$</p> <p>$$f_k(x) =\int^x_0 f_{k-1}(t)\,dt$$</p> <p>By solving the recursive formula </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!} = \begin{cases} \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{x^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-(-1)^{\lceil k/2 \rceil}\sin(x) &amp; \text{If $k$ is odd} \\ \sum_{n=0}^{k/2-1}\frac{x^{2n}(-1)^{n+k/2}}{(2n)!}-(-1)^{k/2}\cos(x) &amp; \text{If $k$ is even}\end{cases}$$</p> <p>Finally we have the closed form </p> <blockquote> <p>$$F(k) = \begin{cases}\left( \frac{2}{\pi}\right)^k \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{(\pi/2)^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^k(-1)^{\lceil k/2 \rceil} &amp; \text{If $k$ is odd} \\ \left( \frac{2}{\pi}\right)^k\sum_{n=0}^{k/2-1}\frac{(\pi/2)^{2n}(-1)^{n+k/2}}{(2n)!}&amp; \text{If $k$ is even}\end{cases}$$</p> </blockquote> <p>To check the correctness of the formula </p> <p>For $k=5$ we have </p> <p>$$F(5)=\left(\frac{2}{\pi}\right)^5 \sum_{n=0}^{1}\frac{(\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^5(-1)^{3} = {1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$$</p> Answer 32: <p>Your sum is the special value at $x=(-(\pi/2)^2)$ of $$\sum_{n=0}^\infty \frac{x^n}{(2n + k)!}=\frac{\, _1F_2\left(1;\frac{k}{2}+\frac{1}{2},\frac{k}{2}+1;\frac{x}{4}\right)}{k!}$$</p> Answer 33: <p>The sum can also be expressed in terms of the <em>Regularized Incomplete Gamma</em> function ($Q(a,z)$).<br> In fact, premised that for a general function of a integer $f(k)$ we have $$ \begin{gathered} \frac{{\left( {i^{\,k} + \left( { - i} \right)^{\,k} } \right)}} {2}f(k) = \left( {\frac{{e^{\,i\,k\frac{\pi } {2}} + e^{\, - \,i\,k\frac{\pi } {2}} }} {2}} \right)f(k) = \cos \left( {k\frac{\pi } {2}} \right)f(k) = \hfill \\ = \left[ {k = 2j} \right]\left( { - 1} \right)^{\,j} f(2j)\quad \left| {\;k,j\; \in \;\;\mathbb{Z}\,} \right. 
\hfill \\ \end{gathered} $$ and that the <em>Lower Incomplete Gamma</em> function can be expressed as: $$ \begin{gathered} \gamma (s,z) = \int_{\,0\,}^{\,z\,} {t^{\,s - 1} \;e^{\, - \,t} dt} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \;\Gamma (s)\;\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\Gamma (s + k + 1)}}} = \frac{{z^{\,s} \;e^{\,\, - z} }} {s}\sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{\left( {s + 1} \right)^{\,\overline {\,k\,} } \,}}} = \hfill \\ = z^{\,s} \;e^{\,\, - z} \sum\limits_{0\, \leqslant \,k} {\frac{{z^{\,k} }} {{s^{\,\overline {\,k + 1\,} } \,}}} = z^{\,s} \sum\limits_{0\, \leqslant \,j} {\frac{{\left( { - 1} \right)^{\,j} }} {{\,\left( {s + j} \right)}}\frac{{z^{\,j} }} {{j!}}} \hfill \\ \end{gathered} $$ we can then write $$ \begin{gathered} F(x,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{x^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \frac{1} {2}\left( {\sum\limits_{0\, \leqslant \,k} {\frac{{\left( {i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} + \sum\limits_{0\, \leqslant \,k} {\frac{{\left( { - \,i\,x} \right)^{\,k} }} {{\left( {k + m} \right)!}}} } \right) = \hfill \\ = \frac{1} {{2\,\Gamma (m)}}\left( {\frac{{e^{\,\,i\,x} \,\gamma (m,i\,x)}} {{\left( {i\,x} \right)^{\,m} }} + \frac{{e^{\, - \,i\,x} \,\gamma (m, - i\,x)}} {{\left( { - i\,x} \right)^{\,m} }}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\frac{{\gamma (m,i\,x)}} {{\Gamma (m)}}} \right) = \hfill \\ = \frac{1} {{\left| x \right|^{\,\,m} }}\,\operatorname{Re} \left( {e^{\,\,i\,x} e^{\,\, - \,i\,m\,\left( {sign(x)\pi /2} \right)} \,\left( {1 - Q(m,i\,x)} \right)} \right) \hfill \\ \end{gathered} $$</p> <p>So, for $x=\pi /2$ we get $$ \begin{gathered} F(\pi /2,m) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + m} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\frac{{\gamma (m,i\,\pi /2)}} {{\Gamma (m)}}} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,m} \,\operatorname{Re} \left( {e^{\,\, - \,i\,\left( {m - 1} \right)\,\left( {\pi /2} \right)} \,\left( {1 - Q(m,i\,\pi /2)} \right)} \right) \hfill \\ \end{gathered} $$ <strong>example with $m=5$</strong> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} {{\left( {2k + 5} \right)!}}} = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \left( {e^{\,\, - \,i\,4\,\left( {\pi /2} \right)} \,\left( {1 - Q(5,i\,\pi /2)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \,\operatorname{Re} \,\left( {1 - \frac{1} {{24}}\left( {12\,\pi - \frac{1} {2}\pi ^{\,3} + i\left( {3\pi ^{\,2} - \frac{1} {{16}}\pi ^{\,4} - 24} \right)} \right)} \right) = \hfill \\ = \left( {\frac{2} {\pi }} \right)^{\,\,5} \left( {1 - \,\frac{\pi } {2} + \frac{1} {{48}}\pi ^{\,3} } \right) = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} \hfill \\ \end{gathered} $$ which matches with the value given by <em>Zaid Alyafeai</em>, as well as with the fomula indicated by <em>Igor Rivin</em> $$ \begin{gathered} F(\pi /2,5) = \sum\limits_{0\, \leqslant \,k} {\left( { - 1} \right)^{\,k} \frac{{\left( {\pi /2} \right)^{\,2k} }} 
{{\left( {2k + 5} \right)!}}} = \left( {\frac{2} {\pi }} \right)^{\,\,5} - \,\left( {\frac{2} {\pi }} \right)^{\,\,4} + \frac{1} {6}\left( {\frac{2} {\pi }} \right)^{\,2} = \hfill \\ = 0.007860176 \cdots = \hfill \\ = \frac{1} {{5!}}{}_1F_2 \left( {1\;;\;\frac{5} {2} + \frac{1} {2},\;\frac{5} {2} + 1\;;\; - \frac{1} {4}\left( {\frac{\pi } {2}} \right)^{\,2} } \right) \hfill \\ \end{gathered} $$ and concerning the latter, a computer calculation over multiple values of $m$ and $x$ shows a full match.</p> Answer 34: <p>Note that</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n)! } = \cos(x)$$</p> <p>You can reach the result by integrating $k$ times. </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^nx^{2n}}{(2n+k)! } = \frac{1}{x^{k}}\int^{x}_0 \mathrm{d}t_{k-1}\int^{t_{k-1}}_0 \mathrm{d}t_{k-2} \cdots\int^{t_1}_0\mathrm{d}t_0\cos(t_0) $$</p> <p>For example when $k=1$</p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+1)! } = \frac{2}{\pi}\int^{\pi/2}_0 \mathrm{d}t_0 \cos(t_0)\,= \frac{2}{\pi}$$</p> <p>For $k=2$</p> <p>\begin{align} \sum_{n=0}^\infty \frac{(-1)^n(\pi/2)^{2n}}{(2n+2)! } &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\int^{t_1}_0\mathrm{d}t_0\cos(t_0) \\ &amp;= \left( \frac{2}{\pi}\right)^2\int^{\pi/2}_0 \mathrm{d}t_{1}\sin(t_1)\\ &amp; = \left( \frac{2}{\pi}\right)^2 \end{align}</p> <p>For </p> <p>$$f_k(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!}$$</p> <p>We can define recursively </p> <p>$$f_0(x) = \cos(x)$$</p> <p>$$f_k(x) =\int^x_0 f_{k-1}(t)\,dt$$</p> <p>By solving the recursive formula </p> <p>$$\sum_{n=0}^\infty \frac{(-1)^n x^{2n+k}}{(2n+k)!} = \begin{cases} \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{x^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-(-1)^{\lceil k/2 \rceil}\sin(x) &amp; \text{If $k$ is odd} \\ \sum_{n=0}^{k/2-1}\frac{x^{2n}(-1)^{n+k/2+1}}{(2n)!}+(-1)^{k/2}\cos(x) &amp; \text{If $k$ is even}\end{cases}$$</p> <p>Finally we have the closed form </p> <blockquote> <p>$$F(k) = \begin{cases}\left( \frac{2}{\pi}\right)^k \sum_{n=0}^{\lceil k/2 \rceil-2}\frac{(\pi/2)^{2n+1}(-1)^{\lceil k/2 \rceil+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^k(-1)^{\lceil k/2 \rceil} &amp; \text{If $k$ is odd} \\ \left( \frac{2}{\pi}\right)^k\sum_{n=0}^{k/2-1}\frac{(\pi/2)^{2n}(-1)^{n+k/2+1}}{(2n)!}&amp; \text{If $k$ is even}\end{cases}$$</p> </blockquote> <p>To check the correctness of the formula </p> <p>For $k=5$ we have </p> <p>$$F(5)=\left(\frac{2}{\pi}\right)^5 \sum_{n=0}^{1}\frac{(\pi/2)^{2n+1}(-1)^{3+n}}{(2n+1)!}-\left( \frac{2}{\pi}\right)^5(-1)^{3} = {1\over 6}\left({2\over \pi}\right)^2-\left({2\over \pi}\right)^4+\left({2\over \pi}\right)^5$$</p> Answer 35: <p>Your sum is the special value at $x=(-(\pi/2)^2)$ of $$\sum_{n=0}^\infty \frac{x^n}{(2n + k)!}=\frac{\, _1F_2\left(1;\frac{k}{2}+\frac{1}{2},\frac{k}{2}+1;\frac{x}{4}\right)}{k!}$$</p>
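As a numerical cross-check of the identities above (my own sketch, not part of the original answers): the defining series for $F(\pi/2,5)$, the incomplete-gamma expression, and the closed form should all agree. The snippet assumes the mpmath library, whose gammainc(m, 0, z) is the lower incomplete gamma $\gamma(m,z)$ and accepts complex $z$.

```python
# Check F(pi/2, 5): series vs. incomplete-gamma identity vs. closed form.
from mpmath import mp, nsum, inf, factorial, gammainc, gamma, exp, re

mp.dps = 30
m, x = 5, mp.pi / 2

series = nsum(lambda k: (-1)**k * x**(2*k) / factorial(2*k + m), [0, inf])
ident = re(exp(1j*x) * exp(-1j*m*mp.pi/2) * gammainc(m, 0, 1j*x) / gamma(m)) / x**m
t = 2 / mp.pi
closed = t**5 - t**4 + t**2 / 6

print(series, ident, closed)   # all three print 0.00786017...
```

Looping this comparison over several $m$ and $x$ is exactly the kind of computer calculation the last sentence above refers to.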
Selling apples for two people
2,080,230
6
https://math.stackexchange.com/questions/2080230/selling-apples-for-two-people
<p>Let $A=\{0,1,\dots,n\}$. We have $n$ apples to be divided between two people. The two people have the same <em>nondecreasing</em> function $f:A\rightarrow\mathbb{R}_{\ge 0}$ that indicates the value of receiving a given number of apples. This function satisfies $f(0)=0$ and $f(x+y)\leq f(x)+f(y)$ for any $x,y\in A$ with $x+y\in A$.</p> <p>Let $M$ denote the optimal total value from dividing the apples, i.e. $M=\max_{0\leq i\leq n}\,\bigl(f(i)+f(n-i)\bigr)$. </p> <p>Define $m$ as follows. We are allowed to set a price $p$ per apple (possibly depending on $f$) and do the following for the first person, then the second person: if there are $k$ apples left, the next person will choose the number $l\in\{0,1,\dots,k\}$ of apples that maximizes the profit $f(l)-p\cdot l$. Let $m$ be the optimal total value from selling the apples in this way.</p> <p>Clearly, $m\leq M$. On the other hand, by setting price $p=0$ we can see that $m\geq \frac{1}{2}M$. Is there a constant $\frac{1}{2}&lt;c&lt;1$ such that $m\geq cM$ always holds?</p>
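To make the two quantities concrete, here is a small brute-force sketch (my illustration; the helper names are mine, not from the question). $M$ is a scan over all splits, while for $m$ it suffices to test the finitely many slope prices $(f(b)-f(a))/(b-a)$, slightly nudged, since the buyers' choices can only change at those values. The problem leaves tie-breaking open; this sketch breaks ties toward fewer apples.

```python
from itertools import combinations

def best_buy(f, p, k):
    # l in {0,...,k} maximizing f[l] - p*l; ties broken toward fewer apples
    return max(range(k + 1), key=lambda l: (f[l] - p * l, -l))

def M_and_m(f, eps=1e-9):
    n = len(f) - 1
    M = max(f[i] + f[n - i] for i in range(n + 1))
    slopes = {(f[b] - f[a]) / (b - a) for a, b in combinations(range(n + 1), 2)}
    m = 0.0
    for p in {0.0} | {s + d for s in slopes for d in (-eps, 0.0, eps)}:
        if p < 0:
            continue
        l1 = best_buy(f, p, n)         # first person chooses from n apples
        l2 = best_buy(f, p, n - l1)    # second person chooses from the rest
        m = max(m, f[l1] + f[l2])
    return M, m

# The n = 4 step function from one of the answers below, with r = 1.4:
print(M_and_m([0, 1, 1.4, 1.4, 2.4]))   # approximately (2.8, 2.4)
```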
[ { "answer_id": 2521101, "score": 1, "text": "<p>I will argue $m\\geq \\tfrac23 M.$ Set the price $p=M/3n.$ Denote the number of apples chosen by the first and second person by $n_1$ and $n_2$ respectively, so $m\\geq f(n_1)+f(n_2).$</p>\n\n<p>If the optimal quantities in an optimal solution are $n_1^*$ and $n-n_1^*,$ the midpoint of $(n_1^*,f(n_1^*))$ and $(n-n_1^*,f(n-n_1^*))$ is $(\\tfrac n2,\\tfrac M2).$ One of the points must lie on or above the line $y=M\\tfrac{n+x}{3n}$ because this line passes through $(\\tfrac n2,\\tfrac M2).$ So $(n_1,f(n_1))$ also lies above this line:\n$$f(n_1)\\geq M\\tfrac{n+n_1}{3n}.\\tag{1}$$</p>\n\n<p>If $n_1\\leq \\tfrac n2$ then $n_2=n_1$ and $m\\geq 2f(n_1)\\geq \\tfrac {2M}{3}.$</p>\n\n<p>For $n_1&gt;\\tfrac n2,$ we want to show that $f(n_2)\\geq \\tfrac 23 M-f(n_1).$ \nThis will be the case as long as there is a point $(x,f(x)),$ with $x\\leq n-n_1,$ lying on or above the line $y=\\tfrac 23 M-f(n_1)+\\tfrac M{3n}x.$ Let $k=\\lfloor n_1/(n-n_1)\\rfloor$ so that\n$$n_1=k(n-n_1)+((k+1)n_1-kn)$$\nwith $0\\leq (k+1)n_1-kn&lt;n-n_1.$ Suppose for contradiction that $(n-n_1,f(n-n_1))$ and $((k+1)n_1-kn,f((k+1)n_1-kn))$ lie below the line $y=\\tfrac 23 M-f(n_1)+\\tfrac M{3n}x,$ so:</p>\n\n<ul>\n<li>$f(n-n_1)&lt;\\tfrac 23 M-f(n_1)+\\tfrac M{3n}(n-n_1),$ and</li>\n<li>$f((k+1)n_1-kn)&lt;\\tfrac 23 M-f(n_1)+\\tfrac M{3n}((k+1)n_1-kn)$</li>\n</ul>\n\n<p>Subadditivity gives $f(n_1)\\leq kf(n-n_1)+f((k+1)n_1-kn),$ so\n\\begin{align*}\nf(n_1)&amp;\\leq (k+1)(\\tfrac 23 M-f(n_1))+\\tfrac {kM}{3n}(n-n_1)+\\tfrac M{3n}((k+1)n_1-kn)\\\\\n(k+2)f(n_1)&amp;\\leq (k+1)\\tfrac 23 M+\\tfrac{Mn_1}{3n}.\n\\end{align*}\nUsing (1) we get\n$$(k+2)M\\tfrac{n+n_1}{3n}\\leq (k+1)\\tfrac 23 M+\\tfrac{Mn_1}{3n}.$$\nDividing by $M/3n$ gives $(k+2)(n+n_1)\\leq 2(k+1)n+n_1,$ which is $(k+1)n_1-nk&lt;0,$ a contradiction. This proves that $f(n_2)\\geq \\tfrac 23M-f(n_1)$ and hence $m\\geq \\tfrac 23M.$</p>\n\n<p>For an upper bound $c\\leq \\tfrac 2 3$, take a large even integer $n,$ a small $\\epsilon&gt;0,$ and consider the function defined by $f(x)=n+x(1-\\epsilon)$ for $0&lt;x&lt;n,$ and $f(n)=2n.$ The optimum $M=(3-\\epsilon)n$ is given by choosing $n_1=n_2=n/2.$ If the price is below $M/3n$ then the first person takes all the apples, with value $2n.$ If the price is above $M/3n$ then both people take one apple, with total value $2n+2-2\\epsilon.$ This gives $m/M\\approx 2/3.$</p>\n\n<p>Note that even if the prices are allowed to vary (\"non-uniform pricing\"), there is a bound of $\\tfrac 3 4$ given by <a href=\"https://arxiv.org/abs/1705.06623\" rel=\"nofollow noreferrer\">Ezra et al, \"Pricing Identical Items\"</a> Proposition 5.6.</p>\n" }, { "answer_id": 2522989, "score": 1, "text": "<p>I can propose to MSE community the first and not very checked improvement of a trivial upper bound for $c$. Namely, we can show $c\\le\\frac{2n-3}{2n-2}$ for $n\\ge 4$. Indeed, pick any number $1\\le r&lt;\\frac{n-1}{n-2}$ and put $f(1)=1$, $f(2)=f(3)=\\dots=f(n-1)=r$, and $f(n)=r+1$. Then for each division $n=x+y$ with $x,y\\in A$ we have $f(x)+f(y)=2r$ iff both $x$ and $y$ are at least $2$ and $f(x)+f(y)=r+1$, otherwise. Thus $M=2r$. If a price $p$ forces a divison $n=x+y$ with $x\\ge 2$ then $r-xp\\ge 1-p$ and $r-xp\\ge r+1-np$. That is $\\frac{r-1}{x-1}\\ge p\\ge\\frac 1{n-x}$ and $r\\ge\\frac{n-1}{n-x}\\ge \\frac{n-1}{n-2}$, which is impossible. Thus $m=r+1$. 
Hence $$c\\le \\frac{1+\\frac{n-1}{n-2}}{\\frac{n-1}{n-2}}=\\frac{2n-3}{2n-2}.$$</p>\n\n<p>The above example was plainly generalized from a case $n=4$. I guess a different function $f$ can provide better upper bounds for $c$, but I didn’t figured (yet) such a function $f$. I guess patterns for it can be suggested by computer search for $n\\le 10$. </p>\n" }, { "answer_id": 2521101, "score": 1, "text": "<p>I will argue $m\\geq \\tfrac23 M.$ Set the price $p=M/3n.$ Denote the number of apples chosen by the first and second person by $n_1$ and $n_2$ respectively, so $m\\geq f(n_1)+f(n_2).$</p>\n\n<p>If the optimal quantities in an optimal solution are $n_1^*$ and $n-n_1^*,$ the midpoint of $(n_1^*,f(n_1^*))$ and $(n-n_1^*,f(n-n_1^*))$ is $(\\tfrac n2,\\tfrac M2).$ One of the points must lie on or above the line $y=M\\tfrac{n+x}{3n}$ because this line passes through $(\\tfrac n2,\\tfrac M2).$ So $(n_1,f(n_1))$ also lies above this line:\n$$f(n_1)\\geq M\\tfrac{n+n_1}{3n}.\\tag{1}$$</p>\n\n<p>If $n_1\\leq \\tfrac n2$ then $n_2=n_1$ and $m\\geq 2f(n_1)\\geq \\tfrac {2M}{3}.$</p>\n\n<p>For $n_1&gt;\\tfrac n2,$ we want to show that $f(n_2)\\geq \\tfrac 23 M-f(n_1).$ \nThis will be the case as long as there is a point $(x,f(x)),$ with $x\\leq n-n_1,$ lying on or above the line $y=\\tfrac 23 M-f(n_1)+\\tfrac M{3n}x.$ Let $k=\\lfloor n_1/(n-n_1)\\rfloor$ so that\n$$n_1=k(n-n_1)+((k+1)n_1-kn)$$\nwith $0\\leq (k+1)n_1-kn&lt;n-n_1.$ Suppose for contradiction that $(n-n_1,f(n-n_1))$ and $((k+1)n_1-kn,f((k+1)n_1-kn))$ lie below the line $y=\\tfrac 23 M-f(n_1)+\\tfrac M{3n}x,$ so:</p>\n\n<ul>\n<li>$f(n-n_1)&lt;\\tfrac 23 M-f(n_1)+\\tfrac M{3n}(n-n_1),$ and</li>\n<li>$f((k+1)n_1-kn)&lt;\\tfrac 23 M-f(n_1)+\\tfrac M{3n}((k+1)n_1-kn)$</li>\n</ul>\n\n<p>Subadditivity gives $f(n_1)\\leq kf(n-n_1)+f((k+1)n_1-kn),$ so\n\\begin{align*}\nf(n_1)&amp;\\leq (k+1)(\\tfrac 23 M-f(n_1))+\\tfrac {kM}{3n}(n-n_1)+\\tfrac M{3n}((k+1)n_1-kn)\\\\\n(k+2)f(n_1)&amp;\\leq (k+1)\\tfrac 23 M+\\tfrac{Mn_1}{3n}.\n\\end{align*}\nUsing (1) we get\n$$(k+2)M\\tfrac{n+n_1}{3n}\\leq (k+1)\\tfrac 23 M+\\tfrac{Mn_1}{3n}.$$\nDividing by $M/3n$ gives $(k+2)(n+n_1)\\leq 2(k+1)n+n_1,$ which is $(k+1)n_1-nk&lt;0,$ a contradiction. This proves that $f(n_2)\\geq \\tfrac 23M-f(n_1)$ and hence $m\\geq \\tfrac 23M.$</p>\n\n<p>For an upper bound $c\\leq \\tfrac 2 3$, take a large even integer $n,$ a small $\\epsilon&gt;0,$ and consider the function defined by $f(x)=n+x(1-\\epsilon)$ for $0&lt;x&lt;n,$ and $f(n)=2n.$ The optimum $M=(3-\\epsilon)n$ is given by choosing $n_1=n_2=n/2.$ If the price is below $M/3n$ then the first person takes all the apples, with value $2n.$ If the price is above $M/3n$ then both people take one apple, with total value $2n+2-2\\epsilon.$ This gives $m/M\\approx 2/3.$</p>\n\n<p>Note that even if the prices are allowed to vary (\"non-uniform pricing\"), there is a bound of $\\tfrac 3 4$ given by <a href=\"https://arxiv.org/abs/1705.06623\" rel=\"nofollow noreferrer\">Ezra et al, \"Pricing Identical Items\"</a> Proposition 5.6.</p>\n" }, { "answer_id": 2522989, "score": 1, "text": "<p>I can propose to MSE community the first and not very checked improvement of a trivial upper bound for $c$. Namely, we can show $c\\le\\frac{2n-3}{2n-2}$ for $n\\ge 4$. Indeed, pick any number $1\\le r&lt;\\frac{n-1}{n-2}$ and put $f(1)=1$, $f(2)=f(3)=\\dots=f(n-1)=r$, and $f(n)=r+1$. Then for each division $n=x+y$ with $x,y\\in A$ we have $f(x)+f(y)=2r$ iff both $x$ and $y$ are at least $2$ and $f(x)+f(y)=r+1$, otherwise. Thus $M=2r$. 
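A randomized sanity check of the first answer's claim (my sketch, with an ad-hoc generator of random nondecreasing subadditive valuations): the single price $p=M/3n$ should already earn total value at least $\tfrac23 M$.

```python
import random

def buy(f, p, k):
    # a profit maximizer; the 2/3 bound is insensitive to how ties are broken
    return max(range(k + 1), key=lambda l: (f[l] - p * l, -l))

def random_valuation(n):
    # nondecreasing, subadditive, f(0) = 0: cap each new value by
    # min over a of f(a) + f(x - a) to preserve subadditivity
    f = [0.0]
    for x in range(1, n + 1):
        cap = min((f[a] + f[x - a] for a in range(1, x)), default=float("inf"))
        f.append(min(f[-1] + random.random(), cap))
    return f

random.seed(0)
for _ in range(5000):
    f = random_valuation(random.randint(2, 12))
    n = len(f) - 1
    M = max(f[i] + f[n - i] for i in range(n + 1))
    l1 = buy(f, M / (3 * n), n)
    l2 = buy(f, M / (3 * n), n - l1)
    assert f[l1] + f[l2] >= 2 * M / 3 - 1e-9
print("p = M/3n earned at least 2M/3 in every trial")
```

Random draws rarely come close to the worst case; the ratio 2/3 is approached only by step-like functions such as the one in the answer's upper-bound example.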
Question: Selling apples for two people Answer 1: <p>I will argue $m\geq \tfrac23 M.$ Set the price $p=M/3n.$ Denote the number of apples chosen by the first and second person by $n_1$ and $n_2$ respectively, so $m\geq f(n_1)+f(n_2).$</p> <p>If the optimal quantities in an optimal solution are $n_1^*$ and $n-n_1^*,$ the midpoint of $(n_1^*,f(n_1^*))$ and $(n-n_1^*,f(n-n_1^*))$ is $(\tfrac n2,\tfrac M2).$ One of the points must lie on or above the line $y=M\tfrac{n+x}{3n}$ because this line passes through $(\tfrac n2,\tfrac M2).$ So $(n_1,f(n_1))$ also lies above this line: $$f(n_1)\geq M\tfrac{n+n_1}{3n}.\tag{1}$$</p> <p>If $n_1\leq \tfrac n2$ then $n_2=n_1$ and $m\geq 2f(n_1)\geq \tfrac {2M}{3}.$</p> <p>For $n_1&gt;\tfrac n2,$ we want to show that $f(n_2)\geq \tfrac 23 M-f(n_1).$ This will be the case as long as there is a point $(x,f(x)),$ with $x\leq n-n_1,$ lying on or above the line $y=\tfrac 23 M-f(n_1)+\tfrac M{3n}x.$ Let $k=\lfloor n_1/(n-n_1)\rfloor$ so that $$n_1=k(n-n_1)+((k+1)n_1-kn)$$ with $0\leq (k+1)n_1-kn&lt;n-n_1.$ Suppose for contradiction that $(n-n_1,f(n-n_1))$ and $((k+1)n_1-kn,f((k+1)n_1-kn))$ lie below the line $y=\tfrac 23 M-f(n_1)+\tfrac M{3n}x,$ so:</p> <ul> <li>$f(n-n_1)&lt;\tfrac 23 M-f(n_1)+\tfrac M{3n}(n-n_1),$ and</li> <li>$f((k+1)n_1-kn)&lt;\tfrac 23 M-f(n_1)+\tfrac M{3n}((k+1)n_1-kn)$</li> </ul> <p>Subadditivity gives $f(n_1)\leq kf(n-n_1)+f((k+1)n_1-kn),$ so \begin{align*} f(n_1)&amp;\leq (k+1)(\tfrac 23 M-f(n_1))+\tfrac {kM}{3n}(n-n_1)+\tfrac M{3n}((k+1)n_1-kn)\\ (k+2)f(n_1)&amp;\leq (k+1)\tfrac 23 M+\tfrac{Mn_1}{3n}. \end{align*} Using (1) we get $$(k+2)M\tfrac{n+n_1}{3n}\leq (k+1)\tfrac 23 M+\tfrac{Mn_1}{3n}.$$ Dividing by $M/3n$ gives $(k+2)(n+n_1)\leq 2(k+1)n+n_1,$ which is $(k+1)n_1-nk&lt;0,$ a contradiction. This proves that $f(n_2)\geq \tfrac 23M-f(n_1)$ and hence $m\geq \tfrac 23M.$</p> <p>For an upper bound $c\leq \tfrac 2 3$, take a large even integer $n,$ a small $\epsilon&gt;0,$ and consider the function defined by $f(x)=n+x(1-\epsilon)$ for $0&lt;x&lt;n,$ and $f(n)=2n.$ The optimum $M=(3-\epsilon)n$ is given by choosing $n_1=n_2=n/2.$ If the price is below $M/3n$ then the first person takes all the apples, with value $2n.$ If the price is above $M/3n$ then both people take one apple, with total value $2n+2-2\epsilon.$ This gives $m/M\approx 2/3.$</p> <p>Note that even if the prices are allowed to vary ("non-uniform pricing"), there is a bound of $\tfrac 3 4$ given by <a href="https://arxiv.org/abs/1705.06623" rel="nofollow noreferrer">Ezra et al, "Pricing Identical Items"</a> Proposition 5.6.</p> Answer 2: <p>I can propose to MSE community the first and not very checked improvement of a trivial upper bound for $c$. Namely, we can show $c\le\frac{2n-3}{2n-2}$ for $n\ge 4$. Indeed, pick any number $1\le r&lt;\frac{n-1}{n-2}$ and put $f(1)=1$, $f(2)=f(3)=\dots=f(n-1)=r$, and $f(n)=r+1$. Then for each division $n=x+y$ with $x,y\in A$ we have $f(x)+f(y)=2r$ iff both $x$ and $y$ are at least $2$ and $f(x)+f(y)=r+1$, otherwise. Thus $M=2r$. If a price $p$ forces a divison $n=x+y$ with $x\ge 2$ then $r-xp\ge 1-p$ and $r-xp\ge r+1-np$. That is $\frac{r-1}{x-1}\ge p\ge\frac 1{n-x}$ and $r\ge\frac{n-1}{n-x}\ge \frac{n-1}{n-2}$, which is impossible. Thus $m=r+1$. Hence $$c\le \frac{1+\frac{n-1}{n-2}}{\frac{n-1}{n-2}}=\frac{2n-3}{2n-2}.$$</p> <p>The above example was plainly generalized from a case $n=4$. I guess a different function $f$ can provide better upper bounds for $c$, but I didn’t figured (yet) such a function $f$. 
I guess patterns for it can be suggested by computer search for $n\le 10$. </p>
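A minimal numerical sketch of the tight example above (Python; the buyer model is my assumption from the setup: at a uniform price $p$, each person in turn takes the quantity maximizing $f(q)-pq$, with the first person choosing first; the function names are illustrative only). Since demand can only change at a price equal to a slope of $f$, scanning those finitely many prices, plus tiny perturbations around them, finds the seller's best uniform price:

```python
from fractions import Fraction

def apple_ratio(n=40, eps=Fraction(1, 50)):
    # Tight example from Answer 1: f(x) = n + x(1 - eps) for 0 < x < n, f(n) = 2n.
    def f(x):
        if x == 0:
            return Fraction(0)
        return Fraction(2 * n) if x == n else n + x * (1 - eps)

    # Planner optimum M: the best split of the n apples between the two people.
    M = max(f(a) + f(n - a) for a in range(n + 1))

    def welfare(p):
        # Each person in turn takes the quantity maximizing f(q) - p*q.
        take = lambda avail: max(range(avail + 1), key=lambda q: f(q) - p * q)
        n1 = take(n)
        n2 = take(n - n1)
        return f(n1) + f(n2)

    # Demand only changes at prices equal to a slope of f, so testing those
    # prices and tiny perturbations around them suffices.
    slopes = {(f(a) - f(b)) / (a - b) for a in range(1, n + 1) for b in range(a)}
    delta = Fraction(1, 10**9)
    m = max(welfare(p + d) for p in slopes for d in (-delta, Fraction(0), delta))
    return m, M, float(m / M)

print(apple_ratio())
```

For these parameters the sketch should report $m=2n+2-2\epsilon$ against $M=(3-\epsilon)n$, a ratio of roughly $0.69$ that drifts down toward $2/3$ as $n$ grows, consistent with the bound proved in Answer 1.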
Checking if matrix is diagonalizable
2,081,705
6
https://math.stackexchange.com/questions/2081705/checking-if-matrix-is-diagonalizable
<p>When trying to check if $$C=\begin{pmatrix} 1&amp;1&amp;1\\ 1&amp;1&amp;1\\ 1&amp;1&amp;1\\ \end{pmatrix}\in\Bbb R^{3\times3}$$ is diagonalizable, we find $c_C(x)=-x^2(x-3) \Rightarrow λ=0,3$ with $m(0)=2,m(3)=1$. So to find eigenvectors we solve $(C-λI)X=0$, which gives $$\begin{pmatrix}x\\y\\z\\\end{pmatrix}\in\Bbb R^{3\times1}\setminus\{0\},\begin{pmatrix}x\\x\\x\\\end{pmatrix}\in\Bbb R^{3\times1}\setminus\{0\}$$ for $0$ and $3$ respectively. Is this correct so far? If so, how do I proceed to find whether there exists a basis of $\Bbb R^{3\times1}$ consisting of these eigenvectors?</p>
[ { "answer_id": 2082991, "score": 2, "text": "<p>By using $\\dim V_C(λ)=\\dim(\\Bbb R^{3\\times3})−r(C−λI)$ we get that $\\dim V_C(0)=2$ and $\\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\\setminus \\{0\\}$, say $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}\\;,$$ and a basis of $V_C(3)\\setminus\\{0\\}$, say $$\\begin{pmatrix}1\\\\1\\\\1\\end{pmatrix}\\;.$$\nThese three vectors are a basis of $\\Bbb R^{3\\times3}$</p>\n" }, { "answer_id": 2082806, "score": 0, "text": "<p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}$$\nas the other two eigenvectors of $C$.</p>\n" }, { "answer_id": 2082991, "score": 2, "text": "<p>By using $\\dim V_C(λ)=\\dim(\\Bbb R^{3\\times3})−r(C−λI)$ we get that $\\dim V_C(0)=2$ and $\\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\\setminus \\{0\\}$, say $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}\\;,$$ and a basis of $V_C(3)\\setminus\\{0\\}$, say $$\\begin{pmatrix}1\\\\1\\\\1\\end{pmatrix}\\;.$$\nThese three vectors are a basis of $\\Bbb R^{3\\times3}$</p>\n" }, { "answer_id": 2082806, "score": 0, "text": "<p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}$$\nas the other two eigenvectors of $C$.</p>\n" }, { "answer_id": 2082991, "score": 2, "text": "<p>By using $\\dim V_C(λ)=\\dim(\\Bbb R^{3\\times3})−r(C−λI)$ we get that $\\dim V_C(0)=2$ and $\\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\\setminus \\{0\\}$, say $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}\\;,$$ and a basis of $V_C(3)\\setminus\\{0\\}$, say $$\\begin{pmatrix}1\\\\1\\\\1\\end{pmatrix}\\;.$$\nThese three vectors are a basis of $\\Bbb R^{3\\times3}$</p>\n" }, { "answer_id": 2082806, "score": 0, "text": "<p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}$$\nas the other two eigenvectors of $C$.</p>\n" }, { "answer_id": 2082991, "score": 2, "text": "<p>By using $\\dim V_C(λ)=\\dim(\\Bbb R^{3\\times3})−r(C−λI)$ we get that $\\dim V_C(0)=2$ and $\\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\\setminus \\{0\\}$, say $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}\\;,$$ and a basis of $V_C(3)\\setminus\\{0\\}$, say $$\\begin{pmatrix}1\\\\1\\\\1\\end{pmatrix}\\;.$$\nThese three vectors are a basis of $\\Bbb R^{3\\times3}$</p>\n" }, { "answer_id": 2082806, "score": 0, "text": "<p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. 
You can choose the vectors $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}$$\nas the other two eigenvectors of $C$.</p>\n" }, { "answer_id": 2082991, "score": 2, "text": "<p>By using $\\dim V_C(λ)=\\dim(\\Bbb R^{3\\times3})−r(C−λI)$ we get that $\\dim V_C(0)=2$ and $\\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\\setminus \\{0\\}$, say $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}\\;,$$ and a basis of $V_C(3)\\setminus\\{0\\}$, say $$\\begin{pmatrix}1\\\\1\\\\1\\end{pmatrix}\\;.$$\nThese three vectors are a basis of $\\Bbb R^{3\\times3}$</p>\n" }, { "answer_id": 2082806, "score": 0, "text": "<p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}$$\nas the other two eigenvectors of $C$.</p>\n" }, { "answer_id": 2082991, "score": 2, "text": "<p>By using $\\dim V_C(λ)=\\dim(\\Bbb R^{3\\times3})−r(C−λI)$ we get that $\\dim V_C(0)=2$ and $\\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\\setminus \\{0\\}$, say $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}\\;,$$ and a basis of $V_C(3)\\setminus\\{0\\}$, say $$\\begin{pmatrix}1\\\\1\\\\1\\end{pmatrix}\\;.$$\nThese three vectors are a basis of $\\Bbb R^{3\\times3}$</p>\n" }, { "answer_id": 2082806, "score": 0, "text": "<p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}$$\nas the other two eigenvectors of $C$.</p>\n" }, { "answer_id": 2082991, "score": 2, "text": "<p>By using $\\dim V_C(λ)=\\dim(\\Bbb R^{3\\times3})−r(C−λI)$ we get that $\\dim V_C(0)=2$ and $\\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\\setminus \\{0\\}$, say $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}\\;,$$ and a basis of $V_C(3)\\setminus\\{0\\}$, say $$\\begin{pmatrix}1\\\\1\\\\1\\end{pmatrix}\\;.$$\nThese three vectors are a basis of $\\Bbb R^{3\\times3}$</p>\n" }, { "answer_id": 2082806, "score": 0, "text": "<p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}$$\nas the other two eigenvectors of $C$.</p>\n" }, { "answer_id": 2082991, "score": 2, "text": "<p>By using $\\dim V_C(λ)=\\dim(\\Bbb R^{3\\times3})−r(C−λI)$ we get that $\\dim V_C(0)=2$ and $\\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\\setminus \\{0\\}$, say $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}\\;,$$ and a basis of $V_C(3)\\setminus\\{0\\}$, say $$\\begin{pmatrix}1\\\\1\\\\1\\end{pmatrix}\\;.$$\nThese three vectors are a basis of $\\Bbb R^{3\\times3}$</p>\n" }, { "answer_id": 2082806, "score": 0, "text": "<p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. 
You can choose the vectors $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}$$\nas the other two eigenvectors of $C$.</p>\n" }, { "answer_id": 2082991, "score": 2, "text": "<p>By using $\\dim V_C(λ)=\\dim(\\Bbb R^{3\\times3})−r(C−λI)$ we get that $\\dim V_C(0)=2$ and $\\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\\setminus \\{0\\}$, say $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}\\;,$$ and a basis of $V_C(3)\\setminus\\{0\\}$, say $$\\begin{pmatrix}1\\\\1\\\\1\\end{pmatrix}\\;.$$\nThese three vectors are a basis of $\\Bbb R^{3\\times3}$</p>\n" }, { "answer_id": 2082806, "score": 0, "text": "<p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}$$\nas the other two eigenvectors of $C$.</p>\n" }, { "answer_id": 2082991, "score": 2, "text": "<p>By using $\\dim V_C(λ)=\\dim(\\Bbb R^{3\\times3})−r(C−λI)$ we get that $\\dim V_C(0)=2$ and $\\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\\setminus \\{0\\}$, say $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}\\;,$$ and a basis of $V_C(3)\\setminus\\{0\\}$, say $$\\begin{pmatrix}1\\\\1\\\\1\\end{pmatrix}\\;.$$\nThese three vectors are a basis of $\\Bbb R^{3\\times3}$</p>\n" }, { "answer_id": 2082806, "score": 0, "text": "<p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}$$\nas the other two eigenvectors of $C$.</p>\n" }, { "answer_id": 2082991, "score": 2, "text": "<p>By using $\\dim V_C(λ)=\\dim(\\Bbb R^{3\\times3})−r(C−λI)$ we get that $\\dim V_C(0)=2$ and $\\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\\setminus \\{0\\}$, say $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}\\;,$$ and a basis of $V_C(3)\\setminus\\{0\\}$, say $$\\begin{pmatrix}1\\\\1\\\\1\\end{pmatrix}\\;.$$\nThese three vectors are a basis of $\\Bbb R^{3\\times3}$</p>\n" }, { "answer_id": 2082806, "score": 0, "text": "<p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}$$\nas the other two eigenvectors of $C$.</p>\n" }, { "answer_id": 2082991, "score": 2, "text": "<p>By using $\\dim V_C(λ)=\\dim(\\Bbb R^{3\\times3})−r(C−λI)$ we get that $\\dim V_C(0)=2$ and $\\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\\setminus \\{0\\}$, say $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}\\;,$$ and a basis of $V_C(3)\\setminus\\{0\\}$, say $$\\begin{pmatrix}1\\\\1\\\\1\\end{pmatrix}\\;.$$\nThese three vectors are a basis of $\\Bbb R^{3\\times3}$</p>\n" }, { "answer_id": 2082806, "score": 0, "text": "<p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. 
You can choose the vectors $$\\begin{pmatrix}1\\\\-1\\\\0\\end{pmatrix}, \\begin{pmatrix}1\\\\0\\\\-1\\end{pmatrix}$$\nas the other two eigenvectors of $C$.</p>\n" } ]
Question: Checking if matrix is diagonalizable Answer 1: <p>By using $\dim V_C(λ)=\dim(\Bbb R^{3\times3})−r(C−λI)$ we get that $\dim V_C(0)=2$ and $\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\setminus \{0\}$, say $$\begin{pmatrix}1\\-1\\0\end{pmatrix}, \begin{pmatrix}1\\0\\-1\end{pmatrix}\;,$$ and a basis of $V_C(3)\setminus\{0\}$, say $$\begin{pmatrix}1\\1\\1\end{pmatrix}\;.$$ These three vectors are a basis of $\Bbb R^{3\times3}$</p> Answer 2: <p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\begin{pmatrix}1\\-1\\0\end{pmatrix}, \begin{pmatrix}1\\0\\-1\end{pmatrix}$$ as the other two eigenvectors of $C$.</p> Answer 3: <p>By using $\dim V_C(λ)=\dim(\Bbb R^{3\times3})−r(C−λI)$ we get that $\dim V_C(0)=2$ and $\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\setminus \{0\}$, say $$\begin{pmatrix}1\\-1\\0\end{pmatrix}, \begin{pmatrix}1\\0\\-1\end{pmatrix}\;,$$ and a basis of $V_C(3)\setminus\{0\}$, say $$\begin{pmatrix}1\\1\\1\end{pmatrix}\;.$$ These three vectors are a basis of $\Bbb R^{3\times3}$</p> Answer 4: <p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\begin{pmatrix}1\\-1\\0\end{pmatrix}, \begin{pmatrix}1\\0\\-1\end{pmatrix}$$ as the other two eigenvectors of $C$.</p> Answer 5: <p>By using $\dim V_C(λ)=\dim(\Bbb R^{3\times3})−r(C−λI)$ we get that $\dim V_C(0)=2$ and $\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\setminus \{0\}$, say $$\begin{pmatrix}1\\-1\\0\end{pmatrix}, \begin{pmatrix}1\\0\\-1\end{pmatrix}\;,$$ and a basis of $V_C(3)\setminus\{0\}$, say $$\begin{pmatrix}1\\1\\1\end{pmatrix}\;.$$ These three vectors are a basis of $\Bbb R^{3\times3}$</p> Answer 6: <p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\begin{pmatrix}1\\-1\\0\end{pmatrix}, \begin{pmatrix}1\\0\\-1\end{pmatrix}$$ as the other two eigenvectors of $C$.</p> Answer 7: <p>By using $\dim V_C(λ)=\dim(\Bbb R^{3\times3})−r(C−λI)$ we get that $\dim V_C(0)=2$ and $\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\setminus \{0\}$, say $$\begin{pmatrix}1\\-1\\0\end{pmatrix}, \begin{pmatrix}1\\0\\-1\end{pmatrix}\;,$$ and a basis of $V_C(3)\setminus\{0\}$, say $$\begin{pmatrix}1\\1\\1\end{pmatrix}\;.$$ These three vectors are a basis of $\Bbb R^{3\times3}$</p> Answer 8: <p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. You can choose the vectors $$\begin{pmatrix}1\\-1\\0\end{pmatrix}, \begin{pmatrix}1\\0\\-1\end{pmatrix}$$ as the other two eigenvectors of $C$.</p> Answer 9: <p>By using $\dim V_C(λ)=\dim(\Bbb R^{3\times3})−r(C−λI)$ we get that $\dim V_C(0)=2$ and $\dim V_C(3)=1$ so it is diagonalizable. Now from the solutions of the two systems we find a basis of $V_C(0)\setminus \{0\}$, say $$\begin{pmatrix}1\\-1\\0\end{pmatrix}, \begin{pmatrix}1\\0\\-1\end{pmatrix}\;,$$ and a basis of $V_C(3)\setminus\{0\}$, say $$\begin{pmatrix}1\\1\\1\end{pmatrix}\;.$$ These three vectors are a basis of $\Bbb R^{3\times3}$</p> Answer 10: <p>Observe the vector of all ones gives the eigenvalue $3$. So the other two eigenvectors are orthogonal to it. 
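A quick numerical sanity check of the record above, as a minimal sketch in Python/NumPy. The matrix $C$ itself is not reproduced in this record, so the $3\times3$ all-ones matrix is assumed here; it matches the quoted eigendata (eigenvalue $3$ on the all-ones vector, eigenvalue $0$ with a two-dimensional eigenspace).
<pre><code>import numpy as np

# Assumption: C is the 3x3 all-ones matrix (inferred from the answers, not quoted in the record).
C = np.ones((3, 3))

# dim V_C(lambda) = 3 - rank(C - lambda*I), the formula used in Answer 1.
for lam in (0.0, 3.0):
    rank = np.linalg.matrix_rank(C - lam * np.eye(3))
    print(lam, 3 - rank)   # prints 0.0 2 and 3.0 1

# Columns are the claimed eigenvectors; P is invertible (det P = 3),
# and P^{-1} C P comes out diagonal, confirming diagonalizability.
P = np.array([[1.0, 1.0, 1.0],
              [-1.0, 0.0, 1.0],
              [0.0, -1.0, 1.0]])
print(np.round(np.linalg.inv(P) @ C @ P))   # diag(0, 0, 3)
</code></pre>
A diagonal $P^{-1}CP$ verifies that the three vectors named in both answers really form an eigenbasis.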
Is this set of vectors linearly (in)dependent?
2,081,794
6
https://math.stackexchange.com/questions/2081794/is-this-set-of-vectors-linearly-independent
<p>I have the following problem:</p> <p>Are the following vectors linearly independent in $\mathbb{R}^2$? \begin{bmatrix} -1 \\ 2 \end{bmatrix}\begin{bmatrix} 1 \\ -2 \end{bmatrix}\begin{bmatrix} 2 \\ -4 \end{bmatrix}</p> <p>When I solve this using $c_1 v_1+c_2 v_2+c_3 v_3=0$, I get an underdetermined system; can anyone help me understand what this means for the linear independence?</p> <p>Thanks in advance :)</p>
[ { "answer_id": 2081798, "score": 18, "text": "<p>$\\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\\mathbb R^2$ can <em>never</em> be linearly independent.</p>\n\n<p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p>\n" }, { "answer_id": 2081799, "score": 7, "text": "<p>$$0v_1+2v_2-v_3=0$$</p>\n\n<p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p>\n" }, { "answer_id": 2082211, "score": 7, "text": "<p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p>\n\n<p>$$c_1 \\begin{bmatrix} -1 \\\\ 2 \\end{bmatrix} + c_2 \\begin{bmatrix} 1 \\\\ -2 \\end{bmatrix} + c_3 \\begin{bmatrix} 2 \\\\ -4 \\end{bmatrix} = 0.$$</p>\n\n<h2>Correct answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$0 = 0.$$</p>\n\n<p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p>\n\n<p>$$c_1 = c_2 + 2c_3$$</p>\n\n<p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \\in \\mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p>\n\n<h2>Old answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$-2c_3 = 0 \\implies c_3 = 0.$$</p>\n\n<p>Then substituting back into the first one, you get </p>\n\n<p>$$-c_1 + c_2 = 0$$\nor\n$$c_1 = c_2,$$</p>\n\n<p>which is all the information you can squeeze out of those two equations.</p>\n\n<p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \\in \\mathbb R$) solves the system. If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. 
So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p>\n\n<h2>The short way</h2>\n\n<p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the \"duplicates\". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p>\n\n<p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p>\n" }, { "answer_id": 2082104, "score": 1, "text": "<p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p>\n" }, { "answer_id": 2082734, "score": 1, "text": "<p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p>\n\n<p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p>\n\n<pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2)\n\n vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2)\n</code></pre>\n\n<p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p>\n\n<p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p>\n" }, { "answer_id": 2081798, "score": 18, "text": "<p>$\\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\\mathbb R^2$ can <em>never</em> be linearly independent.</p>\n\n<p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p>\n" }, { "answer_id": 2081799, "score": 7, "text": "<p>$$0v_1+2v_2-v_3=0$$</p>\n\n<p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p>\n" }, { "answer_id": 2082211, "score": 7, "text": "<p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p>\n\n<p>$$c_1 \\begin{bmatrix} -1 \\\\ 2 \\end{bmatrix} + c_2 \\begin{bmatrix} 1 \\\\ -2 \\end{bmatrix} + c_3 \\begin{bmatrix} 2 \\\\ -4 \\end{bmatrix} = 0.$$</p>\n\n<h2>Correct answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. 
For example, adding the first equation to the second one twice gives</p>\n\n<p>$$0 = 0.$$</p>\n\n<p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p>\n\n<p>$$c_1 = c_2 + 2c_3$$</p>\n\n<p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \\in \\mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p>\n\n<h2>Old answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$-2c_3 = 0 \\implies c_3 = 0.$$</p>\n\n<p>Then substituting back into the first one, you get </p>\n\n<p>$$-c_1 + c_2 = 0$$\nor\n$$c_1 = c_2,$$</p>\n\n<p>which is all the information you can squeeze out of those two equations.</p>\n\n<p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \\in \\mathbb R$) solves the system. If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p>\n\n<h2>The short way</h2>\n\n<p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the \"duplicates\". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p>\n\n<p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p>\n" }, { "answer_id": 2082104, "score": 1, "text": "<p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p>\n" }, { "answer_id": 2082734, "score": 1, "text": "<p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. 
Even any two of your vectors are not linearly independent, but linearly dependent.</p>\n\n<p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p>\n\n<pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2)\n\n vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2)\n</code></pre>\n\n<p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p>\n\n<p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p>\n" }, { "answer_id": 2081798, "score": 18, "text": "<p>$\\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\\mathbb R^2$ can <em>never</em> be linearly independent.</p>\n\n<p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p>\n" }, { "answer_id": 2081799, "score": 7, "text": "<p>$$0v_1+2v_2-v_3=0$$</p>\n\n<p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p>\n" }, { "answer_id": 2082211, "score": 7, "text": "<p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p>\n\n<p>$$c_1 \\begin{bmatrix} -1 \\\\ 2 \\end{bmatrix} + c_2 \\begin{bmatrix} 1 \\\\ -2 \\end{bmatrix} + c_3 \\begin{bmatrix} 2 \\\\ -4 \\end{bmatrix} = 0.$$</p>\n\n<h2>Correct answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$0 = 0.$$</p>\n\n<p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p>\n\n<p>$$c_1 = c_2 + 2c_3$$</p>\n\n<p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \\in \\mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p>\n\n<h2>Old answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>(Note that the first equation marked with (!) 
is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$-2c_3 = 0 \\implies c_3 = 0.$$</p>\n\n<p>Then substituting back into the first one, you get </p>\n\n<p>$$-c_1 + c_2 = 0$$\nor\n$$c_1 = c_2,$$</p>\n\n<p>which is all the information you can squeeze out of those two equations.</p>\n\n<p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \\in \\mathbb R$) solves the system. If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p>\n\n<h2>The short way</h2>\n\n<p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the \"duplicates\". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p>\n\n<p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p>\n" }, { "answer_id": 2082104, "score": 1, "text": "<p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p>\n" }, { "answer_id": 2082734, "score": 1, "text": "<p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p>\n\n<p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p>\n\n<pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2)\n\n vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2)\n</code></pre>\n\n<p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p>\n\n<p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. 
This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p>\n" }, { "answer_id": 2081798, "score": 18, "text": "<p>$\\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\\mathbb R^2$ can <em>never</em> be linearly independent.</p>\n\n<p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p>\n" }, { "answer_id": 2081799, "score": 7, "text": "<p>$$0v_1+2v_2-v_3=0$$</p>\n\n<p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p>\n" }, { "answer_id": 2082211, "score": 7, "text": "<p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p>\n\n<p>$$c_1 \\begin{bmatrix} -1 \\\\ 2 \\end{bmatrix} + c_2 \\begin{bmatrix} 1 \\\\ -2 \\end{bmatrix} + c_3 \\begin{bmatrix} 2 \\\\ -4 \\end{bmatrix} = 0.$$</p>\n\n<h2>Correct answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$0 = 0.$$</p>\n\n<p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p>\n\n<p>$$c_1 = c_2 + 2c_3$$</p>\n\n<p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \\in \\mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p>\n\n<h2>Old answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$-2c_3 = 0 \\implies c_3 = 0.$$</p>\n\n<p>Then substituting back into the first one, you get </p>\n\n<p>$$-c_1 + c_2 = 0$$\nor\n$$c_1 = c_2,$$</p>\n\n<p>which is all the information you can squeeze out of those two equations.</p>\n\n<p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \\in \\mathbb R$) solves the system. 
If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p>\n\n<h2>The short way</h2>\n\n<p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the \"duplicates\". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p>\n\n<p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p>\n" }, { "answer_id": 2082104, "score": 1, "text": "<p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p>\n" }, { "answer_id": 2082734, "score": 1, "text": "<p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p>\n\n<p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p>\n\n<pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2)\n\n vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2)\n</code></pre>\n\n<p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p>\n\n<p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. 
This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p>\n" }, { "answer_id": 2081798, "score": 18, "text": "<p>$\\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\\mathbb R^2$ can <em>never</em> be linearly independent.</p>\n\n<p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p>\n" }, { "answer_id": 2081799, "score": 7, "text": "<p>$$0v_1+2v_2-v_3=0$$</p>\n\n<p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p>\n" }, { "answer_id": 2082211, "score": 7, "text": "<p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p>\n\n<p>$$c_1 \\begin{bmatrix} -1 \\\\ 2 \\end{bmatrix} + c_2 \\begin{bmatrix} 1 \\\\ -2 \\end{bmatrix} + c_3 \\begin{bmatrix} 2 \\\\ -4 \\end{bmatrix} = 0.$$</p>\n\n<h2>Correct answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$0 = 0.$$</p>\n\n<p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p>\n\n<p>$$c_1 = c_2 + 2c_3$$</p>\n\n<p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \\in \\mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p>\n\n<h2>Old answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$-2c_3 = 0 \\implies c_3 = 0.$$</p>\n\n<p>Then substituting back into the first one, you get </p>\n\n<p>$$-c_1 + c_2 = 0$$\nor\n$$c_1 = c_2,$$</p>\n\n<p>which is all the information you can squeeze out of those two equations.</p>\n\n<p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \\in \\mathbb R$) solves the system. 
If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p>\n\n<h2>The short way</h2>\n\n<p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the \"duplicates\". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p>\n\n<p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p>\n" }, { "answer_id": 2082104, "score": 1, "text": "<p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p>\n" }, { "answer_id": 2082734, "score": 1, "text": "<p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p>\n\n<p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p>\n\n<pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2)\n\n vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2)\n</code></pre>\n\n<p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p>\n\n<p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. 
This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p>\n" }, { "answer_id": 2081798, "score": 18, "text": "<p>$\\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\\mathbb R^2$ can <em>never</em> be linearly independent.</p>\n\n<p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p>\n" }, { "answer_id": 2081799, "score": 7, "text": "<p>$$0v_1+2v_2-v_3=0$$</p>\n\n<p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p>\n" }, { "answer_id": 2082211, "score": 7, "text": "<p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p>\n\n<p>$$c_1 \\begin{bmatrix} -1 \\\\ 2 \\end{bmatrix} + c_2 \\begin{bmatrix} 1 \\\\ -2 \\end{bmatrix} + c_3 \\begin{bmatrix} 2 \\\\ -4 \\end{bmatrix} = 0.$$</p>\n\n<h2>Correct answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$0 = 0.$$</p>\n\n<p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p>\n\n<p>$$c_1 = c_2 + 2c_3$$</p>\n\n<p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \\in \\mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p>\n\n<h2>Old answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$-2c_3 = 0 \\implies c_3 = 0.$$</p>\n\n<p>Then substituting back into the first one, you get </p>\n\n<p>$$-c_1 + c_2 = 0$$\nor\n$$c_1 = c_2,$$</p>\n\n<p>which is all the information you can squeeze out of those two equations.</p>\n\n<p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \\in \\mathbb R$) solves the system. 
If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p>\n\n<h2>The short way</h2>\n\n<p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the \"duplicates\". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p>\n\n<p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p>\n" }, { "answer_id": 2082104, "score": 1, "text": "<p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p>\n" }, { "answer_id": 2082734, "score": 1, "text": "<p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p>\n\n<p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p>\n\n<pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2)\n\n vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2)\n</code></pre>\n\n<p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p>\n\n<p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. 
This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p>\n" }, { "answer_id": 2081798, "score": 18, "text": "<p>$\\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\\mathbb R^2$ can <em>never</em> be linearly independent.</p>\n\n<p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p>\n" }, { "answer_id": 2081799, "score": 7, "text": "<p>$$0v_1+2v_2-v_3=0$$</p>\n\n<p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p>\n" }, { "answer_id": 2082211, "score": 7, "text": "<p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p>\n\n<p>$$c_1 \\begin{bmatrix} -1 \\\\ 2 \\end{bmatrix} + c_2 \\begin{bmatrix} 1 \\\\ -2 \\end{bmatrix} + c_3 \\begin{bmatrix} 2 \\\\ -4 \\end{bmatrix} = 0.$$</p>\n\n<h2>Correct answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$0 = 0.$$</p>\n\n<p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p>\n\n<p>$$c_1 = c_2 + 2c_3$$</p>\n\n<p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \\in \\mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p>\n\n<h2>Old answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$-2c_3 = 0 \\implies c_3 = 0.$$</p>\n\n<p>Then substituting back into the first one, you get </p>\n\n<p>$$-c_1 + c_2 = 0$$\nor\n$$c_1 = c_2,$$</p>\n\n<p>which is all the information you can squeeze out of those two equations.</p>\n\n<p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \\in \\mathbb R$) solves the system. 
If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p>\n\n<h2>The short way</h2>\n\n<p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the \"duplicates\". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p>\n\n<p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p>\n" }, { "answer_id": 2082104, "score": 1, "text": "<p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p>\n" }, { "answer_id": 2082734, "score": 1, "text": "<p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p>\n\n<p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p>\n\n<pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2)\n\n vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2)\n</code></pre>\n\n<p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p>\n\n<p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. 
This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p>\n" }, { "answer_id": 2081798, "score": 18, "text": "<p>$\\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\\mathbb R^2$ can <em>never</em> be linearly independent.</p>\n\n<p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p>\n" }, { "answer_id": 2081799, "score": 7, "text": "<p>$$0v_1+2v_2-v_3=0$$</p>\n\n<p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p>\n" }, { "answer_id": 2082211, "score": 7, "text": "<p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p>\n\n<p>$$c_1 \\begin{bmatrix} -1 \\\\ 2 \\end{bmatrix} + c_2 \\begin{bmatrix} 1 \\\\ -2 \\end{bmatrix} + c_3 \\begin{bmatrix} 2 \\\\ -4 \\end{bmatrix} = 0.$$</p>\n\n<h2>Correct answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$0 = 0.$$</p>\n\n<p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p>\n\n<p>$$c_1 = c_2 + 2c_3$$</p>\n\n<p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \\in \\mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p>\n\n<h2>Old answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$-2c_3 = 0 \\implies c_3 = 0.$$</p>\n\n<p>Then substituting back into the first one, you get </p>\n\n<p>$$-c_1 + c_2 = 0$$\nor\n$$c_1 = c_2,$$</p>\n\n<p>which is all the information you can squeeze out of those two equations.</p>\n\n<p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \\in \\mathbb R$) solves the system. 
If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p>\n\n<h2>The short way</h2>\n\n<p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the \"duplicates\". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p>\n\n<p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p>\n" }, { "answer_id": 2082104, "score": 1, "text": "<p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p>\n" }, { "answer_id": 2082734, "score": 1, "text": "<p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p>\n\n<p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p>\n\n<pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2)\n\n vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2)\n</code></pre>\n\n<p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p>\n\n<p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. 
This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p>\n" }, { "answer_id": 2081798, "score": 18, "text": "<p>$\\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\\mathbb R^2$ can <em>never</em> be linearly independent.</p>\n\n<p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p>\n" }, { "answer_id": 2081799, "score": 7, "text": "<p>$$0v_1+2v_2-v_3=0$$</p>\n\n<p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p>\n" }, { "answer_id": 2082211, "score": 7, "text": "<p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p>\n\n<p>$$c_1 \\begin{bmatrix} -1 \\\\ 2 \\end{bmatrix} + c_2 \\begin{bmatrix} 1 \\\\ -2 \\end{bmatrix} + c_3 \\begin{bmatrix} 2 \\\\ -4 \\end{bmatrix} = 0.$$</p>\n\n<h2>Correct answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$0 = 0.$$</p>\n\n<p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p>\n\n<p>$$c_1 = c_2 + 2c_3$$</p>\n\n<p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \\in \\mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p>\n\n<h2>Old answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$-2c_3 = 0 \\implies c_3 = 0.$$</p>\n\n<p>Then substituting back into the first one, you get </p>\n\n<p>$$-c_1 + c_2 = 0$$\nor\n$$c_1 = c_2,$$</p>\n\n<p>which is all the information you can squeeze out of those two equations.</p>\n\n<p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \\in \\mathbb R$) solves the system. 
If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p>\n\n<h2>The short way</h2>\n\n<p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the \"duplicates\". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p>\n\n<p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p>\n" }, { "answer_id": 2082104, "score": 1, "text": "<p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p>\n" }, { "answer_id": 2082734, "score": 1, "text": "<p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p>\n\n<p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p>\n\n<pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2)\n\n vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2)\n</code></pre>\n\n<p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p>\n\n<p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. 
This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p>\n" }, { "answer_id": 2081798, "score": 18, "text": "<p>$\\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\\mathbb R^2$ can <em>never</em> be linearly independent.</p>\n\n<p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p>\n" }, { "answer_id": 2081799, "score": 7, "text": "<p>$$0v_1+2v_2-v_3=0$$</p>\n\n<p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p>\n" }, { "answer_id": 2082211, "score": 7, "text": "<p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p>\n\n<p>$$c_1 \\begin{bmatrix} -1 \\\\ 2 \\end{bmatrix} + c_2 \\begin{bmatrix} 1 \\\\ -2 \\end{bmatrix} + c_3 \\begin{bmatrix} 2 \\\\ -4 \\end{bmatrix} = 0.$$</p>\n\n<h2>Correct answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$0 = 0.$$</p>\n\n<p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p>\n\n<p>$$c_1 = c_2 + 2c_3$$</p>\n\n<p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \\in \\mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p>\n\n<h2>Old answer</h2>\n\n<p>As you noticed, this gives you two equations in three unknowns</p>\n\n<p>$$\\begin{matrix}\n -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\\\\n 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0\n\\end{matrix}$$</p>\n\n<p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p>\n\n<p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p>\n\n<p>$$-2c_3 = 0 \\implies c_3 = 0.$$</p>\n\n<p>Then substituting back into the first one, you get </p>\n\n<p>$$-c_1 + c_2 = 0$$\nor\n$$c_1 = c_2,$$</p>\n\n<p>which is all the information you can squeeze out of those two equations.</p>\n\n<p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \\in \\mathbb R$) solves the system. 
If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that \n$$c_1 v_1+c_2 v_2+ c_3 v_3=0$$\ndoes not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p>\n\n<h2>The short way</h2>\n\n<p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the \"duplicates\". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p>\n\n<p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p>\n" }, { "answer_id": 2082104, "score": 1, "text": "<p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p>\n" }, { "answer_id": 2082734, "score": 1, "text": "<p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p>\n\n<p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p>\n\n<pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2)\n\n vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2)\n</code></pre>\n\n<p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p>\n\n<p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. 
Question: Is this set of vectors linearly (in)dependent?

Answer 1: <p>$\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\mathbb R^2$ can <em>never</em> be linearly independent.</p> <p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel.)</p>

Answer 2: <p>$$0v_1+2v_2-v_3=0$$</p> <p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space.</p>

Answer 3: <p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p> <p>$$c_1 \begin{bmatrix} -1 \\ 2 \end{bmatrix} + c_2 \begin{bmatrix} 1 \\ -2 \end{bmatrix} + c_3 \begin{bmatrix} 2 \\ -4 \end{bmatrix} = 0.$$</p> <h2>Correct answer</h2> <p>As you noticed, this gives you two equations in three unknowns:</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> nontrivial solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$0 = 0.$$</p> <p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have, then, is the first equation, which I will rewrite as</p> <p>$$c_1 = c_2 + 2c_3.$$</p> <p>If we let $c_2 = a$ and $c_3 = b$, the equation tells us that any triple $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \in \mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but $(1, 1, 0)$ or $(13, 3, 5)$ are also possibilities. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not have just the trivial solution; in fact it has infinitely many, since $(a + 2b) v_1 + a v_2 + b v_3 = 0$ for every choice of $a$ and $b$. Hence the vectors are not independent.</p> <h2>Old answer</h2> <p>As you noticed, this gives you two equations in three unknowns:</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>(Note that the first equation, marked with (!), is incorrect: due to a typo in the original post it has $c_3$ instead of $2c_3$. I think the rest of the post still shows an important technique, so I'm leaving this part in for reference.)</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> nontrivial solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$-2c_3 = 0 \implies c_3 = 0.$$</p> <p>Then substituting back into the first one, you get</p> <p>$$-c_1 + c_2 = 0$$ or $$c_1 = c_2,$$</p> <p>which is all the information you can squeeze out of those two equations.</p> <p>This means that any triple $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \in \mathbb R$) solves the system. If you set $a = 0$, you get the trivial solution, but $(1, 1, 0)$ is also a possibility. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not have just the trivial solution; in fact it has infinitely many, since $a v_1 + a v_2 + 0 v_3 = 0$ for every $a$. Hence the vectors are not independent.</p> <h2>The short way</h2> <p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Two or more equations may be multiples of one another, in which case you can remove the "duplicates". The $k$ independent equations in $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3$, $k = 1$, while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p> <p>Of course, you might have seen this right away, since $v_1 = -v_2$ and $v_3 = 2v_2$, and either relation alone is sufficient.</p>
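A quick way to sanity-check Answer 3 is symbolic: the minimal sketch below (assuming SymPy is available; the names v1, v2, v3 and combo are just illustrative, not from the answer) plugs the parametrization $(c_1, c_2, c_3) = (a + 2b, a, b)$ into $c_1 v_1 + c_2 v_2 + c_3 v_3$ and confirms the result is the zero vector for all real $a, b$.

<pre><code># Sanity check of Answer 3's parametrization (c1, c2, c3) = (a + 2b, a, b);
# assumes SymPy is installed, names are illustrative.
import sympy as sp

a, b = sp.symbols('a b', real=True)
v1 = sp.Matrix([-1, 2])
v2 = sp.Matrix([1, -2])
v3 = sp.Matrix([2, -4])

# c1*v1 + c2*v2 + c3*v3 with (c1, c2, c3) = (a + 2b, a, b)
combo = (a + 2*b)*v1 + a*v2 + b*v3
print(combo.expand())  # Matrix([[0], [0]]): the zero vector for every a and b
</code></pre>

Setting concrete values, e.g. $a = 1$, $b = 0$, recovers the particular solution $(1, 1, 0)$ mentioned in the answer.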
Answer 4: <p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p>
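Answers 1, 2, and 4 all reduce to the same computational fact: the matrix whose columns are $v_1, v_2, v_3$ has rank $1$, so its columns span only a line and must be dependent, and its nullspace (the set of coefficient triples solving the equation above) is two-dimensional. A minimal sketch of that check, assuming NumPy and SymPy are installed:

<pre><code># Rank and nullspace check; assumes NumPy and SymPy are installed.
import numpy as np
import sympy as sp

# Columns are v1 = (-1, 2), v2 = (1, -2), v3 = (2, -4).
A = [[-1,  1,  2],
     [ 2, -2, -4]]

# Rank 1 is smaller than the number of columns (3), so the columns are dependent.
print(np.linalg.matrix_rank(np.array(A)))  # 1

# Basis of the nullspace: all (c1, c2, c3) with c1*v1 + c2*v2 + c3*v3 = 0.
for vec in sp.Matrix(A).nullspace():
    print(vec.T)  # Matrix([[1, 1, 0]]) and Matrix([[2, 0, 1]])
</code></pre>

The two nullspace basis vectors $(1, 1, 0)$ and $(2, 0, 1)$ are exactly the $a = 1, b = 0$ and $a = 0, b = 1$ cases of Answer 3's parametrization $(a + 2b, a, b)$.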
All we have then, is the first equation, which I will rewrite as</p> <p>$$c_1 = c_2 + 2c_3$$</p> <p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \in \mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p> <h2>Old answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$-2c_3 = 0 \implies c_3 = 0.$$</p> <p>Then substituting back into the first one, you get </p> <p>$$-c_1 + c_2 = 0$$ or $$c_1 = c_2,$$</p> <p>which is all the information you can squeeze out of those two equations.</p> <p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \in \mathbb R$) solves the system. If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p> <h2>The short way</h2> <p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the "duplicates". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p> <p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p> Answer 9: <p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p> Answer 10: <p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p> <p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p> <pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2) vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2) </code></pre> <p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. 
This is the normal behavour of such an equation for linearly dependent vectors. </p> <p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p> Answer 11: <p>$\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\mathbb R^2$ can <em>never</em> be linearly independent.</p> <p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p> Answer 12: <p>$$0v_1+2v_2-v_3=0$$</p> <p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p> Answer 13: <p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p> <p>$$c_1 \begin{bmatrix} -1 \\ 2 \end{bmatrix} + c_2 \begin{bmatrix} 1 \\ -2 \end{bmatrix} + c_3 \begin{bmatrix} 2 \\ -4 \end{bmatrix} = 0.$$</p> <h2>Correct answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$0 = 0.$$</p> <p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p> <p>$$c_1 = c_2 + 2c_3$$</p> <p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \in \mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p> <h2>Old answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$-2c_3 = 0 \implies c_3 = 0.$$</p> <p>Then substituting back into the first one, you get </p> <p>$$-c_1 + c_2 = 0$$ or $$c_1 = c_2,$$</p> <p>which is all the information you can squeeze out of those two equations.</p> <p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \in \mathbb R$) solves the system. If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. 
So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p> <h2>The short way</h2> <p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the "duplicates". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p> <p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p> Answer 14: <p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p> Answer 15: <p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p> <p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p> <pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2) vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2) </code></pre> <p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p> <p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p> Answer 16: <p>$\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\mathbb R^2$ can <em>never</em> be linearly independent.</p> <p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p> Answer 17: <p>$$0v_1+2v_2-v_3=0$$</p> <p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p> Answer 18: <p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p> <p>$$c_1 \begin{bmatrix} -1 \\ 2 \end{bmatrix} + c_2 \begin{bmatrix} 1 \\ -2 \end{bmatrix} + c_3 \begin{bmatrix} 2 \\ -4 \end{bmatrix} = 0.$$</p> <h2>Correct answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$0 = 0.$$</p> <p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. 
All we have then, is the first equation, which I will rewrite as</p> <p>$$c_1 = c_2 + 2c_3$$</p> <p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \in \mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p> <h2>Old answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$-2c_3 = 0 \implies c_3 = 0.$$</p> <p>Then substituting back into the first one, you get </p> <p>$$-c_1 + c_2 = 0$$ or $$c_1 = c_2,$$</p> <p>which is all the information you can squeeze out of those two equations.</p> <p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \in \mathbb R$) solves the system. If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p> <h2>The short way</h2> <p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the "duplicates". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p> <p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p> Answer 19: <p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p> Answer 20: <p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p> <p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p> <pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2) vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2) </code></pre> <p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. 
This is the normal behavour of such an equation for linearly dependent vectors. </p> <p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p> Answer 21: <p>$\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\mathbb R^2$ can <em>never</em> be linearly independent.</p> <p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p> Answer 22: <p>$$0v_1+2v_2-v_3=0$$</p> <p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p> Answer 23: <p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p> <p>$$c_1 \begin{bmatrix} -1 \\ 2 \end{bmatrix} + c_2 \begin{bmatrix} 1 \\ -2 \end{bmatrix} + c_3 \begin{bmatrix} 2 \\ -4 \end{bmatrix} = 0.$$</p> <h2>Correct answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$0 = 0.$$</p> <p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p> <p>$$c_1 = c_2 + 2c_3$$</p> <p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \in \mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p> <h2>Old answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$-2c_3 = 0 \implies c_3 = 0.$$</p> <p>Then substituting back into the first one, you get </p> <p>$$-c_1 + c_2 = 0$$ or $$c_1 = c_2,$$</p> <p>which is all the information you can squeeze out of those two equations.</p> <p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \in \mathbb R$) solves the system. If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. 
So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p> <h2>The short way</h2> <p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the "duplicates". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p> <p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p> Answer 24: <p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p> Answer 25: <p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p> <p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p> <pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2) vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2) </code></pre> <p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p> <p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p> Answer 26: <p>$\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\mathbb R^2$ can <em>never</em> be linearly independent.</p> <p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p> Answer 27: <p>$$0v_1+2v_2-v_3=0$$</p> <p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p> Answer 28: <p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p> <p>$$c_1 \begin{bmatrix} -1 \\ 2 \end{bmatrix} + c_2 \begin{bmatrix} 1 \\ -2 \end{bmatrix} + c_3 \begin{bmatrix} 2 \\ -4 \end{bmatrix} = 0.$$</p> <h2>Correct answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$0 = 0.$$</p> <p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. 
All we have then, is the first equation, which I will rewrite as</p> <p>$$c_1 = c_2 + 2c_3$$</p> <p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \in \mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p> <h2>Old answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$-2c_3 = 0 \implies c_3 = 0.$$</p> <p>Then substituting back into the first one, you get </p> <p>$$-c_1 + c_2 = 0$$ or $$c_1 = c_2,$$</p> <p>which is all the information you can squeeze out of those two equations.</p> <p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \in \mathbb R$) solves the system. If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p> <h2>The short way</h2> <p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the "duplicates". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p> <p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p> Answer 29: <p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p> Answer 30: <p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p> <p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p> <pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2) vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2) </code></pre> <p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. 
This is the normal behavour of such an equation for linearly dependent vectors. </p> <p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p> Answer 31: <p>$\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\mathbb R^2$ can <em>never</em> be linearly independent.</p> <p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p> Answer 32: <p>$$0v_1+2v_2-v_3=0$$</p> <p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p> Answer 33: <p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p> <p>$$c_1 \begin{bmatrix} -1 \\ 2 \end{bmatrix} + c_2 \begin{bmatrix} 1 \\ -2 \end{bmatrix} + c_3 \begin{bmatrix} 2 \\ -4 \end{bmatrix} = 0.$$</p> <h2>Correct answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$0 = 0.$$</p> <p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. All we have then, is the first equation, which I will rewrite as</p> <p>$$c_1 = c_2 + 2c_3$$</p> <p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \in \mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p> <h2>Old answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$-2c_3 = 0 \implies c_3 = 0.$$</p> <p>Then substituting back into the first one, you get </p> <p>$$-c_1 + c_2 = 0$$ or $$c_1 = c_2,$$</p> <p>which is all the information you can squeeze out of those two equations.</p> <p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \in \mathbb R$) solves the system. If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. 
So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p> <h2>The short way</h2> <p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the "duplicates". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p> <p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p> Answer 34: <p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p> Answer 35: <p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p> <p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p> <pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2) vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2) </code></pre> <p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. This is the normal behavour of such an equation for linearly dependent vectors. </p> <p>In your case, even when you would set c1=1, the equation would remain undetermined, because v2 und v3 are linearly dependent. This would not happen if two of the vectors, say v2 and v3, were linearly independent, or they were not simply parallel as your vectors are.</p> Answer 36: <p>$\mathbb R^2$ has dimension $2$, so a set of $3$ vectors from $\mathbb R^2$ can <em>never</em> be linearly independent.</p> <p>(In your case, the three vectors are even more dependent than they have to be, since they are all parallel).</p> Answer 37: <p>$$0v_1+2v_2-v_3=0$$</p> <p>In general, you can never have more than $k$ linearly independent vectors in a $k$-dimensional vector space</p> Answer 38: <p>Although other answers have pointed out the fast track (more than $n$ vectors cannot be linearly independent in an $n$-dimensional space), I just wanted to help you complete your own approach, which was to solve</p> <p>$$c_1 \begin{bmatrix} -1 \\ 2 \end{bmatrix} + c_2 \begin{bmatrix} 1 \\ -2 \end{bmatrix} + c_3 \begin{bmatrix} 2 \\ -4 \end{bmatrix} = 0.$$</p> <h2>Correct answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + 2c_3 &amp;{} = {}&amp; 0\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$0 = 0.$$</p> <p>Though this is true, it gives you absolutely no information. It means the second equation is just a multiple of the first, so any solution to one is also a solution to the other. 
All we have then, is the first equation, which I will rewrite as</p> <p>$$c_1 = c_2 + 2c_3$$</p> <p>If we let $c_2 = a$, $c_3 = b$, the equation tells us that any combination of $(c_1, c_2, c_3)$ of the form $(a + 2b, a, b)$ ($a, b \in \mathbb R$) solves the system. If you set $a = b = 0$, you get the trivial solution, but also $(1, 1, 0)$ or $(13, 3, 5)$ are possibilities. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $(a + 2b) v_1 + a v_2 + b v_3 = 0$, hence the vectors are not independent.</p> <h2>Old answer</h2> <p>As you noticed, this gives you two equations in three unknowns</p> <p>$$\begin{matrix} -c_1 &amp;{} + c_2 &amp;{} + c_3 &amp;{} = {}&amp; 0 &amp; (!)\\ 2 c_1 &amp;{} - 2 c_2 &amp;{} - 4 c_3 &amp;{} = {}&amp; 0 \end{matrix}$$</p> <p>(Note that the first equation marked with (!) is incorrect, due to a typo in the original post it has $c_3$ instead of $2c_3$, but I think that the rest of the post shows an important technique so I'm leaving this part in for reference).</p> <p>Since you have more variables than restrictions, if you can find <em>any</em> solution you will get infinitely many. For example, adding the first equation to the second one twice gives</p> <p>$$-2c_3 = 0 \implies c_3 = 0.$$</p> <p>Then substituting back into the first one, you get </p> <p>$$-c_1 + c_2 = 0$$ or $$c_1 = c_2,$$</p> <p>which is all the information you can squeeze out of those two equations.</p> <p>This means that any combination of $(c_1, c_2, c_3)$ of the form $(a, a, 0)$ ($a \in \mathbb R$) solves the system. If you set $a = 0$, you get the trivial solution, but also $(1, 1, 0)$ is a possibility. So you have shown that $$c_1 v_1+c_2 v_2+ c_3 v_3=0$$ does not just have the trivial solution, in fact you have shown it has infinitely many of them because any linear combination of the form $a v_1 + a v_2 = 0$, hence the vectors are not independent.</p> <h2>The short way</h2> <p>In the two variations above, we have basically explored why a system with fewer equations than unknowns is underdetermined. Either two or more equations are multiples of one another, in which case you can remove the "duplicates". The $k$ independent equations in the $n$ unknowns that you have left fix $k$ of the variables, but you get an $n - k$ dimensional solution space. In the correct answer, $n = 3, k = 1$ while in the first version I accidentally had $k = 2$ but still needed one constant, any arbitrary value of which gives a solution.</p> <p>Of course, you might have seen this right away, since $v_1 = -v_2$, and $v_3 = 2v_2$, and any of these is sufficient.</p> Answer 39: <p>The line $2x+y=0$ joining any two of the given points also passes through the origin, so they are not linearly independent.</p> Answer 40: <p>Actually all three vectors are multiples of each other, as is seen here without any calculation. This is enough to say that the vectors are linearly dependent. Even any two of your vectors are not linearly independent, but linearly dependent.</p> <p>I'm not fluent in writing math symbols (and not yet allowed to add comment), therefore I write somehow clumsy:</p> <pre><code> vector2 (1,-2) = (-1)*vector1 (-1,2) vector3 (2,-4) = 2*vector2 (1,-2) = (-2)*vector1 (-1,2) </code></pre> <p>What concerns your equation, which was your question: because your equation is underdetermined, this means the vectors are linearly dependent. 
Using an integration technique: $\int^b_a \left[f(x)+f^{-1}(x)\right]dx=b^2-a^2$
2,082,493
6
https://math.stackexchange.com/questions/2082493/using-an-integration-technique-intb-a-leftfxf-1x-rightdx-b2-a2
<p>I read about this integration technique on Quora:</p> <blockquote> <p>If $a,b$ are fixed points of $f$, then $$ \int^b_a \left[f(x)+f^{-1}(x)\right]dx=b^2-a^2.$$</p> </blockquote> <p>Apparently it was used in the final of the 2013 MIT Integration Bee, but I can't find that question anywhere.</p> <p><strong>REPHRASING QUESTION</strong>: Could someone show me some integrals that can be <em>cleverly</em> solved with this technique? </p>
[ { "answer_id": 2082513, "score": 3, "text": "<p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p>\n\n<p>$$\\int_a^b f^{-1}(x)dx=\\int_a^b yf'(y)dy$$\n$$=\\big[yf(y)\\big]_a^b-\\int_a^b f(y)dy$$</p>\n\n<p>$$=bf(b)-af(a)-\\int_a^b f(x)dx.$$</p>\n\n<p>$$=b^2-a^2-\\int_a^b f(x)dx$$\nqed.\nfor example, we can take</p>\n\n<p>$f(x)=\\tan(\\frac{\\pi x}{4}), a=0, b=\\pm 1$.\nor</p>\n\n<p>$f(x)=\\sqrt{x^3+1}-1,\\; a=0,b=2$</p>\n" }, { "answer_id": 2082571, "score": 2, "text": "<p>For instance:</p>\n\n<p>$$\\int_0^1 \\frac{x(x-5)}{(x+1)(x-2)}dx = \\int_0^1\\left(\\frac{2x}{x+1} +\\frac{x}{2-x}\\right)dx = 1 - 0 = 1$$</p>\n" }, { "answer_id": 2082575, "score": 2, "text": "<p><strong>Problem</strong>: Evaluate\n$$\\int\\limits_{0}^{1}{\\arcsin x\\,dx}.$$\n<strong>Solution</strong>: Let $f(x) = \\sin\\left(\\frac{\\pi}{2}x\\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. Since $f^{-1}(x) = \\frac{2}{\\pi}\\arcsin x$, we have\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right) + \\frac{2}{\\pi}\\arcsin x\\,dx} = 1.$$\nSince\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right)\\,dx} = \\frac{2}{\\pi}\\int\\limits_{0}^{\\pi/2}{\\sin u\\,du} = \\frac{2}{\\pi} $$\nit follows that\n$$ \\frac{2}{\\pi}\\int\\limits_{0}^{1}{\\arcsin x\\,dx} = 1-\\frac{2}{\\pi} \\implies \\int\\limits_{0}^{1}{\\arcsin x\\,dx} = \\boxed{\\frac{\\pi}{2}-1}. $$</p>\n\n<p>Of course, $\\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>\n" }, { "answer_id": 2082511, "score": 0, "text": "<p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. </p>\n\n<p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \\, ds = dt$ to find $$\\int_a^b f^{-1}(t) \\, dt = \\int_a^b s f'(s) \\, ds$$ \nand then you can integrate by parts to find $$\\int_a^b s f'(s) \\, ds = b^2 - a^2 - \\int_a^b f(s) \\, ds.$$</p>\n" }, { "answer_id": 2082513, "score": 3, "text": "<p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p>\n\n<p>$$\\int_a^b f^{-1}(x)dx=\\int_a^b yf'(y)dy$$\n$$=\\big[yf(y)\\big]_a^b-\\int_a^b f(y)dy$$</p>\n\n<p>$$=bf(b)-af(a)-\\int_a^b f(x)dx.$$</p>\n\n<p>$$=b^2-a^2-\\int_a^b f(x)dx$$\nqed.\nfor example, we can take</p>\n\n<p>$f(x)=\\tan(\\frac{\\pi x}{4}), a=0, b=\\pm 1$.\nor</p>\n\n<p>$f(x)=\\sqrt{x^3+1}-1,\\; a=0,b=2$</p>\n" }, { "answer_id": 2082571, "score": 2, "text": "<p>For instance:</p>\n\n<p>$$\\int_0^1 \\frac{x(x-5)}{(x+1)(x-2)}dx = \\int_0^1\\left(\\frac{2x}{x+1} +\\frac{x}{2-x}\\right)dx = 1 - 0 = 1$$</p>\n" }, { "answer_id": 2082575, "score": 2, "text": "<p><strong>Problem</strong>: Evaluate\n$$\\int\\limits_{0}^{1}{\\arcsin x\\,dx}.$$\n<strong>Solution</strong>: Let $f(x) = \\sin\\left(\\frac{\\pi}{2}x\\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. 
Since $f^{-1}(x) = \\frac{2}{\\pi}\\arcsin x$, we have\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right) + \\frac{2}{\\pi}\\arcsin x\\,dx} = 1.$$\nSince\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right)\\,dx} = \\frac{2}{\\pi}\\int\\limits_{0}^{\\pi/2}{\\sin u\\,du} = \\frac{2}{\\pi} $$\nit follows that\n$$ \\frac{2}{\\pi}\\int\\limits_{0}^{1}{\\arcsin x\\,dx} = 1-\\frac{2}{\\pi} \\implies \\int\\limits_{0}^{1}{\\arcsin x\\,dx} = \\boxed{\\frac{\\pi}{2}-1}. $$</p>\n\n<p>Of course, $\\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>\n" }, { "answer_id": 2082511, "score": 0, "text": "<p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. </p>\n\n<p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \\, ds = dt$ to find $$\\int_a^b f^{-1}(t) \\, dt = \\int_a^b s f'(s) \\, ds$$ \nand then you can integrate by parts to find $$\\int_a^b s f'(s) \\, ds = b^2 - a^2 - \\int_a^b f(s) \\, ds.$$</p>\n" }, { "answer_id": 2082513, "score": 3, "text": "<p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p>\n\n<p>$$\\int_a^b f^{-1}(x)dx=\\int_a^b yf'(y)dy$$\n$$=\\big[yf(y)\\big]_a^b-\\int_a^b f(y)dy$$</p>\n\n<p>$$=bf(b)-af(a)-\\int_a^b f(x)dx.$$</p>\n\n<p>$$=b^2-a^2-\\int_a^b f(x)dx$$\nqed.\nfor example, we can take</p>\n\n<p>$f(x)=\\tan(\\frac{\\pi x}{4}), a=0, b=\\pm 1$.\nor</p>\n\n<p>$f(x)=\\sqrt{x^3+1}-1,\\; a=0,b=2$</p>\n" }, { "answer_id": 2082571, "score": 2, "text": "<p>For instance:</p>\n\n<p>$$\\int_0^1 \\frac{x(x-5)}{(x+1)(x-2)}dx = \\int_0^1\\left(\\frac{2x}{x+1} +\\frac{x}{2-x}\\right)dx = 1 - 0 = 1$$</p>\n" }, { "answer_id": 2082575, "score": 2, "text": "<p><strong>Problem</strong>: Evaluate\n$$\\int\\limits_{0}^{1}{\\arcsin x\\,dx}.$$\n<strong>Solution</strong>: Let $f(x) = \\sin\\left(\\frac{\\pi}{2}x\\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. Since $f^{-1}(x) = \\frac{2}{\\pi}\\arcsin x$, we have\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right) + \\frac{2}{\\pi}\\arcsin x\\,dx} = 1.$$\nSince\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right)\\,dx} = \\frac{2}{\\pi}\\int\\limits_{0}^{\\pi/2}{\\sin u\\,du} = \\frac{2}{\\pi} $$\nit follows that\n$$ \\frac{2}{\\pi}\\int\\limits_{0}^{1}{\\arcsin x\\,dx} = 1-\\frac{2}{\\pi} \\implies \\int\\limits_{0}^{1}{\\arcsin x\\,dx} = \\boxed{\\frac{\\pi}{2}-1}. $$</p>\n\n<p>Of course, $\\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>\n" }, { "answer_id": 2082511, "score": 0, "text": "<p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. 
</p>\n\n<p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \\, ds = dt$ to find $$\\int_a^b f^{-1}(t) \\, dt = \\int_a^b s f'(s) \\, ds$$ \nand then you can integrate by parts to find $$\\int_a^b s f'(s) \\, ds = b^2 - a^2 - \\int_a^b f(s) \\, ds.$$</p>\n" }, { "answer_id": 2082513, "score": 3, "text": "<p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p>\n\n<p>$$\\int_a^b f^{-1}(x)dx=\\int_a^b yf'(y)dy$$\n$$=\\big[yf(y)\\big]_a^b-\\int_a^b f(y)dy$$</p>\n\n<p>$$=bf(b)-af(a)-\\int_a^b f(x)dx.$$</p>\n\n<p>$$=b^2-a^2-\\int_a^b f(x)dx$$\nqed.\nfor example, we can take</p>\n\n<p>$f(x)=\\tan(\\frac{\\pi x}{4}), a=0, b=\\pm 1$.\nor</p>\n\n<p>$f(x)=\\sqrt{x^3+1}-1,\\; a=0,b=2$</p>\n" }, { "answer_id": 2082571, "score": 2, "text": "<p>For instance:</p>\n\n<p>$$\\int_0^1 \\frac{x(x-5)}{(x+1)(x-2)}dx = \\int_0^1\\left(\\frac{2x}{x+1} +\\frac{x}{2-x}\\right)dx = 1 - 0 = 1$$</p>\n" }, { "answer_id": 2082575, "score": 2, "text": "<p><strong>Problem</strong>: Evaluate\n$$\\int\\limits_{0}^{1}{\\arcsin x\\,dx}.$$\n<strong>Solution</strong>: Let $f(x) = \\sin\\left(\\frac{\\pi}{2}x\\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. Since $f^{-1}(x) = \\frac{2}{\\pi}\\arcsin x$, we have\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right) + \\frac{2}{\\pi}\\arcsin x\\,dx} = 1.$$\nSince\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right)\\,dx} = \\frac{2}{\\pi}\\int\\limits_{0}^{\\pi/2}{\\sin u\\,du} = \\frac{2}{\\pi} $$\nit follows that\n$$ \\frac{2}{\\pi}\\int\\limits_{0}^{1}{\\arcsin x\\,dx} = 1-\\frac{2}{\\pi} \\implies \\int\\limits_{0}^{1}{\\arcsin x\\,dx} = \\boxed{\\frac{\\pi}{2}-1}. $$</p>\n\n<p>Of course, $\\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>\n" }, { "answer_id": 2082511, "score": 0, "text": "<p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. </p>\n\n<p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \\, ds = dt$ to find $$\\int_a^b f^{-1}(t) \\, dt = \\int_a^b s f'(s) \\, ds$$ \nand then you can integrate by parts to find $$\\int_a^b s f'(s) \\, ds = b^2 - a^2 - \\int_a^b f(s) \\, ds.$$</p>\n" }, { "answer_id": 2082513, "score": 3, "text": "<p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p>\n\n<p>$$\\int_a^b f^{-1}(x)dx=\\int_a^b yf'(y)dy$$\n$$=\\big[yf(y)\\big]_a^b-\\int_a^b f(y)dy$$</p>\n\n<p>$$=bf(b)-af(a)-\\int_a^b f(x)dx.$$</p>\n\n<p>$$=b^2-a^2-\\int_a^b f(x)dx$$\nqed.\nfor example, we can take</p>\n\n<p>$f(x)=\\tan(\\frac{\\pi x}{4}), a=0, b=\\pm 1$.\nor</p>\n\n<p>$f(x)=\\sqrt{x^3+1}-1,\\; a=0,b=2$</p>\n" }, { "answer_id": 2082571, "score": 2, "text": "<p>For instance:</p>\n\n<p>$$\\int_0^1 \\frac{x(x-5)}{(x+1)(x-2)}dx = \\int_0^1\\left(\\frac{2x}{x+1} +\\frac{x}{2-x}\\right)dx = 1 - 0 = 1$$</p>\n" }, { "answer_id": 2082575, "score": 2, "text": "<p><strong>Problem</strong>: Evaluate\n$$\\int\\limits_{0}^{1}{\\arcsin x\\,dx}.$$\n<strong>Solution</strong>: Let $f(x) = \\sin\\left(\\frac{\\pi}{2}x\\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. 
Since $f^{-1}(x) = \\frac{2}{\\pi}\\arcsin x$, we have\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right) + \\frac{2}{\\pi}\\arcsin x\\,dx} = 1.$$\nSince\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right)\\,dx} = \\frac{2}{\\pi}\\int\\limits_{0}^{\\pi/2}{\\sin u\\,du} = \\frac{2}{\\pi} $$\nit follows that\n$$ \\frac{2}{\\pi}\\int\\limits_{0}^{1}{\\arcsin x\\,dx} = 1-\\frac{2}{\\pi} \\implies \\int\\limits_{0}^{1}{\\arcsin x\\,dx} = \\boxed{\\frac{\\pi}{2}-1}. $$</p>\n\n<p>Of course, $\\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>\n" }, { "answer_id": 2082511, "score": 0, "text": "<p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. </p>\n\n<p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \\, ds = dt$ to find $$\\int_a^b f^{-1}(t) \\, dt = \\int_a^b s f'(s) \\, ds$$ \nand then you can integrate by parts to find $$\\int_a^b s f'(s) \\, ds = b^2 - a^2 - \\int_a^b f(s) \\, ds.$$</p>\n" }, { "answer_id": 2082513, "score": 3, "text": "<p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p>\n\n<p>$$\\int_a^b f^{-1}(x)dx=\\int_a^b yf'(y)dy$$\n$$=\\big[yf(y)\\big]_a^b-\\int_a^b f(y)dy$$</p>\n\n<p>$$=bf(b)-af(a)-\\int_a^b f(x)dx.$$</p>\n\n<p>$$=b^2-a^2-\\int_a^b f(x)dx$$\nqed.\nfor example, we can take</p>\n\n<p>$f(x)=\\tan(\\frac{\\pi x}{4}), a=0, b=\\pm 1$.\nor</p>\n\n<p>$f(x)=\\sqrt{x^3+1}-1,\\; a=0,b=2$</p>\n" }, { "answer_id": 2082571, "score": 2, "text": "<p>For instance:</p>\n\n<p>$$\\int_0^1 \\frac{x(x-5)}{(x+1)(x-2)}dx = \\int_0^1\\left(\\frac{2x}{x+1} +\\frac{x}{2-x}\\right)dx = 1 - 0 = 1$$</p>\n" }, { "answer_id": 2082575, "score": 2, "text": "<p><strong>Problem</strong>: Evaluate\n$$\\int\\limits_{0}^{1}{\\arcsin x\\,dx}.$$\n<strong>Solution</strong>: Let $f(x) = \\sin\\left(\\frac{\\pi}{2}x\\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. Since $f^{-1}(x) = \\frac{2}{\\pi}\\arcsin x$, we have\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right) + \\frac{2}{\\pi}\\arcsin x\\,dx} = 1.$$\nSince\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right)\\,dx} = \\frac{2}{\\pi}\\int\\limits_{0}^{\\pi/2}{\\sin u\\,du} = \\frac{2}{\\pi} $$\nit follows that\n$$ \\frac{2}{\\pi}\\int\\limits_{0}^{1}{\\arcsin x\\,dx} = 1-\\frac{2}{\\pi} \\implies \\int\\limits_{0}^{1}{\\arcsin x\\,dx} = \\boxed{\\frac{\\pi}{2}-1}. $$</p>\n\n<p>Of course, $\\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>\n" }, { "answer_id": 2082511, "score": 0, "text": "<p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. 
</p>\n\n<p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \\, ds = dt$ to find $$\\int_a^b f^{-1}(t) \\, dt = \\int_a^b s f'(s) \\, ds$$ \nand then you can integrate by parts to find $$\\int_a^b s f'(s) \\, ds = b^2 - a^2 - \\int_a^b f(s) \\, ds.$$</p>\n" }, { "answer_id": 2082513, "score": 3, "text": "<p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p>\n\n<p>$$\\int_a^b f^{-1}(x)dx=\\int_a^b yf'(y)dy$$\n$$=\\big[yf(y)\\big]_a^b-\\int_a^b f(y)dy$$</p>\n\n<p>$$=bf(b)-af(a)-\\int_a^b f(x)dx.$$</p>\n\n<p>$$=b^2-a^2-\\int_a^b f(x)dx$$\nqed.\nfor example, we can take</p>\n\n<p>$f(x)=\\tan(\\frac{\\pi x}{4}), a=0, b=\\pm 1$.\nor</p>\n\n<p>$f(x)=\\sqrt{x^3+1}-1,\\; a=0,b=2$</p>\n" }, { "answer_id": 2082571, "score": 2, "text": "<p>For instance:</p>\n\n<p>$$\\int_0^1 \\frac{x(x-5)}{(x+1)(x-2)}dx = \\int_0^1\\left(\\frac{2x}{x+1} +\\frac{x}{2-x}\\right)dx = 1 - 0 = 1$$</p>\n" }, { "answer_id": 2082575, "score": 2, "text": "<p><strong>Problem</strong>: Evaluate\n$$\\int\\limits_{0}^{1}{\\arcsin x\\,dx}.$$\n<strong>Solution</strong>: Let $f(x) = \\sin\\left(\\frac{\\pi}{2}x\\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. Since $f^{-1}(x) = \\frac{2}{\\pi}\\arcsin x$, we have\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right) + \\frac{2}{\\pi}\\arcsin x\\,dx} = 1.$$\nSince\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right)\\,dx} = \\frac{2}{\\pi}\\int\\limits_{0}^{\\pi/2}{\\sin u\\,du} = \\frac{2}{\\pi} $$\nit follows that\n$$ \\frac{2}{\\pi}\\int\\limits_{0}^{1}{\\arcsin x\\,dx} = 1-\\frac{2}{\\pi} \\implies \\int\\limits_{0}^{1}{\\arcsin x\\,dx} = \\boxed{\\frac{\\pi}{2}-1}. $$</p>\n\n<p>Of course, $\\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>\n" }, { "answer_id": 2082511, "score": 0, "text": "<p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. </p>\n\n<p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \\, ds = dt$ to find $$\\int_a^b f^{-1}(t) \\, dt = \\int_a^b s f'(s) \\, ds$$ \nand then you can integrate by parts to find $$\\int_a^b s f'(s) \\, ds = b^2 - a^2 - \\int_a^b f(s) \\, ds.$$</p>\n" }, { "answer_id": 2082513, "score": 3, "text": "<p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p>\n\n<p>$$\\int_a^b f^{-1}(x)dx=\\int_a^b yf'(y)dy$$\n$$=\\big[yf(y)\\big]_a^b-\\int_a^b f(y)dy$$</p>\n\n<p>$$=bf(b)-af(a)-\\int_a^b f(x)dx.$$</p>\n\n<p>$$=b^2-a^2-\\int_a^b f(x)dx$$\nqed.\nfor example, we can take</p>\n\n<p>$f(x)=\\tan(\\frac{\\pi x}{4}), a=0, b=\\pm 1$.\nor</p>\n\n<p>$f(x)=\\sqrt{x^3+1}-1,\\; a=0,b=2$</p>\n" }, { "answer_id": 2082571, "score": 2, "text": "<p>For instance:</p>\n\n<p>$$\\int_0^1 \\frac{x(x-5)}{(x+1)(x-2)}dx = \\int_0^1\\left(\\frac{2x}{x+1} +\\frac{x}{2-x}\\right)dx = 1 - 0 = 1$$</p>\n" }, { "answer_id": 2082575, "score": 2, "text": "<p><strong>Problem</strong>: Evaluate\n$$\\int\\limits_{0}^{1}{\\arcsin x\\,dx}.$$\n<strong>Solution</strong>: Let $f(x) = \\sin\\left(\\frac{\\pi}{2}x\\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. 
Since $f^{-1}(x) = \\frac{2}{\\pi}\\arcsin x$, we have\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right) + \\frac{2}{\\pi}\\arcsin x\\,dx} = 1.$$\nSince\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right)\\,dx} = \\frac{2}{\\pi}\\int\\limits_{0}^{\\pi/2}{\\sin u\\,du} = \\frac{2}{\\pi} $$\nit follows that\n$$ \\frac{2}{\\pi}\\int\\limits_{0}^{1}{\\arcsin x\\,dx} = 1-\\frac{2}{\\pi} \\implies \\int\\limits_{0}^{1}{\\arcsin x\\,dx} = \\boxed{\\frac{\\pi}{2}-1}. $$</p>\n\n<p>Of course, $\\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>\n" }, { "answer_id": 2082511, "score": 0, "text": "<p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. </p>\n\n<p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \\, ds = dt$ to find $$\\int_a^b f^{-1}(t) \\, dt = \\int_a^b s f'(s) \\, ds$$ \nand then you can integrate by parts to find $$\\int_a^b s f'(s) \\, ds = b^2 - a^2 - \\int_a^b f(s) \\, ds.$$</p>\n" }, { "answer_id": 2082513, "score": 3, "text": "<p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p>\n\n<p>$$\\int_a^b f^{-1}(x)dx=\\int_a^b yf'(y)dy$$\n$$=\\big[yf(y)\\big]_a^b-\\int_a^b f(y)dy$$</p>\n\n<p>$$=bf(b)-af(a)-\\int_a^b f(x)dx.$$</p>\n\n<p>$$=b^2-a^2-\\int_a^b f(x)dx$$\nqed.\nfor example, we can take</p>\n\n<p>$f(x)=\\tan(\\frac{\\pi x}{4}), a=0, b=\\pm 1$.\nor</p>\n\n<p>$f(x)=\\sqrt{x^3+1}-1,\\; a=0,b=2$</p>\n" }, { "answer_id": 2082571, "score": 2, "text": "<p>For instance:</p>\n\n<p>$$\\int_0^1 \\frac{x(x-5)}{(x+1)(x-2)}dx = \\int_0^1\\left(\\frac{2x}{x+1} +\\frac{x}{2-x}\\right)dx = 1 - 0 = 1$$</p>\n" }, { "answer_id": 2082575, "score": 2, "text": "<p><strong>Problem</strong>: Evaluate\n$$\\int\\limits_{0}^{1}{\\arcsin x\\,dx}.$$\n<strong>Solution</strong>: Let $f(x) = \\sin\\left(\\frac{\\pi}{2}x\\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. Since $f^{-1}(x) = \\frac{2}{\\pi}\\arcsin x$, we have\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right) + \\frac{2}{\\pi}\\arcsin x\\,dx} = 1.$$\nSince\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right)\\,dx} = \\frac{2}{\\pi}\\int\\limits_{0}^{\\pi/2}{\\sin u\\,du} = \\frac{2}{\\pi} $$\nit follows that\n$$ \\frac{2}{\\pi}\\int\\limits_{0}^{1}{\\arcsin x\\,dx} = 1-\\frac{2}{\\pi} \\implies \\int\\limits_{0}^{1}{\\arcsin x\\,dx} = \\boxed{\\frac{\\pi}{2}-1}. $$</p>\n\n<p>Of course, $\\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>\n" }, { "answer_id": 2082511, "score": 0, "text": "<p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. 
</p>\n\n<p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \\, ds = dt$ to find $$\\int_a^b f^{-1}(t) \\, dt = \\int_a^b s f'(s) \\, ds$$ \nand then you can integrate by parts to find $$\\int_a^b s f'(s) \\, ds = b^2 - a^2 - \\int_a^b f(s) \\, ds.$$</p>\n" }, { "answer_id": 2082513, "score": 3, "text": "<p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p>\n\n<p>$$\\int_a^b f^{-1}(x)dx=\\int_a^b yf'(y)dy$$\n$$=\\big[yf(y)\\big]_a^b-\\int_a^b f(y)dy$$</p>\n\n<p>$$=bf(b)-af(a)-\\int_a^b f(x)dx.$$</p>\n\n<p>$$=b^2-a^2-\\int_a^b f(x)dx$$\nqed.\nfor example, we can take</p>\n\n<p>$f(x)=\\tan(\\frac{\\pi x}{4}), a=0, b=\\pm 1$.\nor</p>\n\n<p>$f(x)=\\sqrt{x^3+1}-1,\\; a=0,b=2$</p>\n" }, { "answer_id": 2082571, "score": 2, "text": "<p>For instance:</p>\n\n<p>$$\\int_0^1 \\frac{x(x-5)}{(x+1)(x-2)}dx = \\int_0^1\\left(\\frac{2x}{x+1} +\\frac{x}{2-x}\\right)dx = 1 - 0 = 1$$</p>\n" }, { "answer_id": 2082575, "score": 2, "text": "<p><strong>Problem</strong>: Evaluate\n$$\\int\\limits_{0}^{1}{\\arcsin x\\,dx}.$$\n<strong>Solution</strong>: Let $f(x) = \\sin\\left(\\frac{\\pi}{2}x\\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. Since $f^{-1}(x) = \\frac{2}{\\pi}\\arcsin x$, we have\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right) + \\frac{2}{\\pi}\\arcsin x\\,dx} = 1.$$\nSince\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right)\\,dx} = \\frac{2}{\\pi}\\int\\limits_{0}^{\\pi/2}{\\sin u\\,du} = \\frac{2}{\\pi} $$\nit follows that\n$$ \\frac{2}{\\pi}\\int\\limits_{0}^{1}{\\arcsin x\\,dx} = 1-\\frac{2}{\\pi} \\implies \\int\\limits_{0}^{1}{\\arcsin x\\,dx} = \\boxed{\\frac{\\pi}{2}-1}. $$</p>\n\n<p>Of course, $\\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>\n" }, { "answer_id": 2082511, "score": 0, "text": "<p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. </p>\n\n<p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \\, ds = dt$ to find $$\\int_a^b f^{-1}(t) \\, dt = \\int_a^b s f'(s) \\, ds$$ \nand then you can integrate by parts to find $$\\int_a^b s f'(s) \\, ds = b^2 - a^2 - \\int_a^b f(s) \\, ds.$$</p>\n" }, { "answer_id": 2082513, "score": 3, "text": "<p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p>\n\n<p>$$\\int_a^b f^{-1}(x)dx=\\int_a^b yf'(y)dy$$\n$$=\\big[yf(y)\\big]_a^b-\\int_a^b f(y)dy$$</p>\n\n<p>$$=bf(b)-af(a)-\\int_a^b f(x)dx.$$</p>\n\n<p>$$=b^2-a^2-\\int_a^b f(x)dx$$\nqed.\nfor example, we can take</p>\n\n<p>$f(x)=\\tan(\\frac{\\pi x}{4}), a=0, b=\\pm 1$.\nor</p>\n\n<p>$f(x)=\\sqrt{x^3+1}-1,\\; a=0,b=2$</p>\n" }, { "answer_id": 2082571, "score": 2, "text": "<p>For instance:</p>\n\n<p>$$\\int_0^1 \\frac{x(x-5)}{(x+1)(x-2)}dx = \\int_0^1\\left(\\frac{2x}{x+1} +\\frac{x}{2-x}\\right)dx = 1 - 0 = 1$$</p>\n" }, { "answer_id": 2082575, "score": 2, "text": "<p><strong>Problem</strong>: Evaluate\n$$\\int\\limits_{0}^{1}{\\arcsin x\\,dx}.$$\n<strong>Solution</strong>: Let $f(x) = \\sin\\left(\\frac{\\pi}{2}x\\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. 
Since $f^{-1}(x) = \\frac{2}{\\pi}\\arcsin x$, we have\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right) + \\frac{2}{\\pi}\\arcsin x\\,dx} = 1.$$\nSince\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right)\\,dx} = \\frac{2}{\\pi}\\int\\limits_{0}^{\\pi/2}{\\sin u\\,du} = \\frac{2}{\\pi} $$\nit follows that\n$$ \\frac{2}{\\pi}\\int\\limits_{0}^{1}{\\arcsin x\\,dx} = 1-\\frac{2}{\\pi} \\implies \\int\\limits_{0}^{1}{\\arcsin x\\,dx} = \\boxed{\\frac{\\pi}{2}-1}. $$</p>\n\n<p>Of course, $\\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>\n" }, { "answer_id": 2082511, "score": 0, "text": "<p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. </p>\n\n<p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \\, ds = dt$ to find $$\\int_a^b f^{-1}(t) \\, dt = \\int_a^b s f'(s) \\, ds$$ \nand then you can integrate by parts to find $$\\int_a^b s f'(s) \\, ds = b^2 - a^2 - \\int_a^b f(s) \\, ds.$$</p>\n" }, { "answer_id": 2082513, "score": 3, "text": "<p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p>\n\n<p>$$\\int_a^b f^{-1}(x)dx=\\int_a^b yf'(y)dy$$\n$$=\\big[yf(y)\\big]_a^b-\\int_a^b f(y)dy$$</p>\n\n<p>$$=bf(b)-af(a)-\\int_a^b f(x)dx.$$</p>\n\n<p>$$=b^2-a^2-\\int_a^b f(x)dx$$\nqed.\nfor example, we can take</p>\n\n<p>$f(x)=\\tan(\\frac{\\pi x}{4}), a=0, b=\\pm 1$.\nor</p>\n\n<p>$f(x)=\\sqrt{x^3+1}-1,\\; a=0,b=2$</p>\n" }, { "answer_id": 2082571, "score": 2, "text": "<p>For instance:</p>\n\n<p>$$\\int_0^1 \\frac{x(x-5)}{(x+1)(x-2)}dx = \\int_0^1\\left(\\frac{2x}{x+1} +\\frac{x}{2-x}\\right)dx = 1 - 0 = 1$$</p>\n" }, { "answer_id": 2082575, "score": 2, "text": "<p><strong>Problem</strong>: Evaluate\n$$\\int\\limits_{0}^{1}{\\arcsin x\\,dx}.$$\n<strong>Solution</strong>: Let $f(x) = \\sin\\left(\\frac{\\pi}{2}x\\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. Since $f^{-1}(x) = \\frac{2}{\\pi}\\arcsin x$, we have\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right) + \\frac{2}{\\pi}\\arcsin x\\,dx} = 1.$$\nSince\n$$ \\int\\limits_{0}^{1}{\\sin\\left(\\frac{\\pi}{2}x\\right)\\,dx} = \\frac{2}{\\pi}\\int\\limits_{0}^{\\pi/2}{\\sin u\\,du} = \\frac{2}{\\pi} $$\nit follows that\n$$ \\frac{2}{\\pi}\\int\\limits_{0}^{1}{\\arcsin x\\,dx} = 1-\\frac{2}{\\pi} \\implies \\int\\limits_{0}^{1}{\\arcsin x\\,dx} = \\boxed{\\frac{\\pi}{2}-1}. $$</p>\n\n<p>Of course, $\\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>\n" }, { "answer_id": 2082511, "score": 0, "text": "<p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. </p>\n\n<p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \\, ds = dt$ to find $$\\int_a^b f^{-1}(t) \\, dt = \\int_a^b s f'(s) \\, ds$$ \nand then you can integrate by parts to find $$\\int_a^b s f'(s) \\, ds = b^2 - a^2 - \\int_a^b f(s) \\, ds.$$</p>\n" } ]
Question: Using an integration technique: $\int^b_a \left[f(x)+f^{-1}(x)\right]dx=b^2-a^2$ Answer 1: <p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p> <p>$$\int_a^b f^{-1}(x)dx=\int_a^b yf'(y)dy$$ $$=\big[yf(y)\big]_a^b-\int_a^b f(y)dy$$</p> <p>$$=bf(b)-af(a)-\int_a^b f(x)dx$$</p> <p>$$=b^2-a^2-\int_a^b f(x)dx,$$ using $f(a)=a$ and $f(b)=b$. QED. For example, we can take</p> <p>$f(x)=\tan(\frac{\pi x}{4}), a=0, b=\pm 1$, or</p> <p>$f(x)=\sqrt{x^3+1}-1,\; a=0,b=2$.</p>
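<p>A quick numerical sanity check of these two examples, as a minimal sketch assuming SciPy is available (the lambdas are our own encodings of $f + f^{-1}$, not part of the original answer):</p>
<pre><code>from math import atan, pi, sqrt, tan
from scipy.integrate import quad

# f(x) = tan(pi*x/4) has fixed points 0 and 1; its inverse is
# (4/pi) * arctan(x). The integral should be 1^2 - 0^2 = 1.
val1, _ = quad(lambda x: tan(pi * x / 4) + (4 / pi) * atan(x), 0, 1)
print(val1)  # ~1.0

# f(x) = sqrt(x^3 + 1) - 1 has fixed points 0 and 2; its inverse is
# ((x + 1)^2 - 1)^(1/3). The integral should be 2^2 - 0^2 = 4.
val2, _ = quad(lambda x: sqrt(x**3 + 1) - 1 + ((x + 1) ** 2 - 1) ** (1 / 3), 0, 2)
print(val2)  # ~4.0
</code></pre>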
Answer 2: <p>For instance (here $f(x)=\frac{2x}{x+1}$, whose fixed points are $0$ and $1$, with $f^{-1}(x)=\frac{x}{2-x}$):</p> <p>$$\int_0^1 \frac{x(x-5)}{(x+1)(x-2)}dx = \int_0^1\left(\frac{2x}{x+1} +\frac{x}{2-x}\right)dx = 1 - 0 = 1$$</p>
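<p>The same kind of numerical check works here (again a sketch assuming SciPy's <code>quad</code>; the split into $f$ and $f^{-1}$ follows the identification above):</p>
<pre><code>from scipy.integrate import quad

# The combined integrand from Answer 2, integrated directly...
val, _ = quad(lambda x: x * (x - 5) / ((x + 1) * (x - 2)), 0, 1)
print(val)  # ~1.0

# ...and written as f(x) + f^{-1}(x) with f(x) = 2x/(x+1).
val2, _ = quad(lambda x: 2 * x / (x + 1) + x / (2 - x), 0, 1)
print(val2)  # ~1.0 = 1^2 - 0^2
</code></pre>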
Answer 3: <p><strong>Problem</strong>: Evaluate $$\int\limits_{0}^{1}{\arcsin x\,dx}.$$ <strong>Solution</strong>: Let $f(x) = \sin\left(\frac{\pi}{2}x\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. Since $f^{-1}(x) = \frac{2}{\pi}\arcsin x$, we have $$ \int\limits_{0}^{1}{\sin\left(\frac{\pi}{2}x\right) + \frac{2}{\pi}\arcsin x\,dx} = 1.$$ Since $$ \int\limits_{0}^{1}{\sin\left(\frac{\pi}{2}x\right)\,dx} = \frac{2}{\pi}\int\limits_{0}^{\pi/2}{\sin u\,du} = \frac{2}{\pi} $$ it follows that $$ \frac{2}{\pi}\int\limits_{0}^{1}{\arcsin x\,dx} = 1-\frac{2}{\pi} \implies \int\limits_{0}^{1}{\arcsin x\,dx} = \boxed{\frac{\pi}{2}-1}. $$</p> <p>Of course, $\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p>
</p> <p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \, ds = dt$ to find $$\int_a^b f^{-1}(t) \, dt = \int_a^b s f'(s) \, ds$$ and then you can integrate by parts to find $$\int_a^b s f'(s) \, ds = b^2 - a^2 - \int_a^b f(s) \, ds.$$</p> Answer 37: <p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p> <p>$$\int_a^b f^{-1}(x)dx=\int_a^b yf'(y)dy$$ $$=\big[yf(y)\big]_a^b-\int_a^b f(y)dy$$</p> <p>$$=bf(b)-af(a)-\int_a^b f(x)dx.$$</p> <p>$$=b^2-a^2-\int_a^b f(x)dx$$ qed. for example, we can take</p> <p>$f(x)=\tan(\frac{\pi x}{4}), a=0, b=\pm 1$. or</p> <p>$f(x)=\sqrt{x^3+1}-1,\; a=0,b=2$</p> Answer 38: <p>For instance:</p> <p>$$\int_0^1 \frac{x(x-5)}{(x+1)(x-2)}dx = \int_0^1\left(\frac{2x}{x+1} +\frac{x}{2-x}\right)dx = 1 - 0 = 1$$</p> Answer 39: <p><strong>Problem</strong>: Evaluate $$\int\limits_{0}^{1}{\arcsin x\,dx}.$$ <strong>Solution</strong>: Let $f(x) = \sin\left(\frac{\pi}{2}x\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. Since $f^{-1}(x) = \frac{2}{\pi}\arcsin x$, we have $$ \int\limits_{0}^{1}{\sin\left(\frac{\pi}{2}x\right) + \frac{2}{\pi}\arcsin x\,dx} = 1.$$ Since $$ \int\limits_{0}^{1}{\sin\left(\frac{\pi}{2}x\right)\,dx} = \frac{2}{\pi}\int\limits_{0}^{\pi/2}{\sin u\,du} = \frac{2}{\pi} $$ it follows that $$ \frac{2}{\pi}\int\limits_{0}^{1}{\arcsin x\,dx} = 1-\frac{2}{\pi} \implies \int\limits_{0}^{1}{\arcsin x\,dx} = \boxed{\frac{\pi}{2}-1}. $$</p> <p>Of course, $\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p> Answer 40: <p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. </p> <p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \, ds = dt$ to find $$\int_a^b f^{-1}(t) \, dt = \int_a^b s f'(s) \, ds$$ and then you can integrate by parts to find $$\int_a^b s f'(s) \, ds = b^2 - a^2 - \int_a^b f(s) \, ds.$$</p> Answer 41: <p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p> <p>$$\int_a^b f^{-1}(x)dx=\int_a^b yf'(y)dy$$ $$=\big[yf(y)\big]_a^b-\int_a^b f(y)dy$$</p> <p>$$=bf(b)-af(a)-\int_a^b f(x)dx.$$</p> <p>$$=b^2-a^2-\int_a^b f(x)dx$$ qed. for example, we can take</p> <p>$f(x)=\tan(\frac{\pi x}{4}), a=0, b=\pm 1$. or</p> <p>$f(x)=\sqrt{x^3+1}-1,\; a=0,b=2$</p> Answer 42: <p>For instance:</p> <p>$$\int_0^1 \frac{x(x-5)}{(x+1)(x-2)}dx = \int_0^1\left(\frac{2x}{x+1} +\frac{x}{2-x}\right)dx = 1 - 0 = 1$$</p> Answer 43: <p><strong>Problem</strong>: Evaluate $$\int\limits_{0}^{1}{\arcsin x\,dx}.$$ <strong>Solution</strong>: Let $f(x) = \sin\left(\frac{\pi}{2}x\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. Since $f^{-1}(x) = \frac{2}{\pi}\arcsin x$, we have $$ \int\limits_{0}^{1}{\sin\left(\frac{\pi}{2}x\right) + \frac{2}{\pi}\arcsin x\,dx} = 1.$$ Since $$ \int\limits_{0}^{1}{\sin\left(\frac{\pi}{2}x\right)\,dx} = \frac{2}{\pi}\int\limits_{0}^{\pi/2}{\sin u\,du} = \frac{2}{\pi} $$ it follows that $$ \frac{2}{\pi}\int\limits_{0}^{1}{\arcsin x\,dx} = 1-\frac{2}{\pi} \implies \int\limits_{0}^{1}{\arcsin x\,dx} = \boxed{\frac{\pi}{2}-1}. 
$$</p> <p>Of course, $\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p> Answer 44: <p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. </p> <p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \, ds = dt$ to find $$\int_a^b f^{-1}(t) \, dt = \int_a^b s f'(s) \, ds$$ and then you can integrate by parts to find $$\int_a^b s f'(s) \, ds = b^2 - a^2 - \int_a^b f(s) \, ds.$$</p> Answer 45: <p>If we put $x=f(y)$ with $dx=f'(y)dy$ and integrate by parts, we get</p> <p>$$\int_a^b f^{-1}(x)dx=\int_a^b yf'(y)dy$$ $$=\big[yf(y)\big]_a^b-\int_a^b f(y)dy$$</p> <p>$$=bf(b)-af(a)-\int_a^b f(x)dx.$$</p> <p>$$=b^2-a^2-\int_a^b f(x)dx$$ qed. for example, we can take</p> <p>$f(x)=\tan(\frac{\pi x}{4}), a=0, b=\pm 1$. or</p> <p>$f(x)=\sqrt{x^3+1}-1,\; a=0,b=2$</p> Answer 46: <p>For instance:</p> <p>$$\int_0^1 \frac{x(x-5)}{(x+1)(x-2)}dx = \int_0^1\left(\frac{2x}{x+1} +\frac{x}{2-x}\right)dx = 1 - 0 = 1$$</p> Answer 47: <p><strong>Problem</strong>: Evaluate $$\int\limits_{0}^{1}{\arcsin x\,dx}.$$ <strong>Solution</strong>: Let $f(x) = \sin\left(\frac{\pi}{2}x\right)$. Then $0$ and $1$ are fixed points of $f$, and $f$ is increasing (and hence invertible) on $(0,1)$. Since $f^{-1}(x) = \frac{2}{\pi}\arcsin x$, we have $$ \int\limits_{0}^{1}{\sin\left(\frac{\pi}{2}x\right) + \frac{2}{\pi}\arcsin x\,dx} = 1.$$ Since $$ \int\limits_{0}^{1}{\sin\left(\frac{\pi}{2}x\right)\,dx} = \frac{2}{\pi}\int\limits_{0}^{\pi/2}{\sin u\,du} = \frac{2}{\pi} $$ it follows that $$ \frac{2}{\pi}\int\limits_{0}^{1}{\arcsin x\,dx} = 1-\frac{2}{\pi} \implies \int\limits_{0}^{1}{\arcsin x\,dx} = \boxed{\frac{\pi}{2}-1}. $$</p> <p>Of course, $\arcsin x$ does have an elementary antiderivative which can be found using integration by parts, but this method seems faster to me.</p> Answer 48: <p>There are a few things going on here: $f$ is implicitly assumed to be continuous, and since $f$ is invertible it must be monotone. You may as well assume that $a &lt; b$. Then $f(a) = a &lt; b = f(b)$ so that $f$ is increasing. </p> <p>If $f$ is differentiable you can make the substitution $s = f^{-1}(t)$, $f'(s) \, ds = dt$ to find $$\int_a^b f^{-1}(t) \, dt = \int_a^b s f'(s) \, ds$$ and then you can integrate by parts to find $$\int_a^b s f'(s) \, ds = b^2 - a^2 - \int_a^b f(s) \, ds.$$</p>
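<p>For concreteness, here is the integration-by-parts computation alluded to in the last sentence; it uses only standard calculus facts and recovers the same value:</p> <p>$$\int_0^1 \arcsin x\,dx = \big[x\arcsin x\big]_0^1 - \int_0^1 \frac{x}{\sqrt{1-x^2}}\,dx = \frac{\pi}{2} - \Big[-\sqrt{1-x^2}\Big]_0^1 = \frac{\pi}{2} - 1,$$ in agreement with the boxed answer.</p>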
What does a number look like that's in $\mathbb{Z}\left[\sqrt{14}, \frac{1}{2}\right]$ that's not in $\mathbb{Z}[\sqrt{14}]$?
2,082,636
6
https://math.stackexchange.com/questions/2082636/what-does-a-number-look-like-thats-in-mathbbz-left-sqrt14-frac12-r
<p>I know the very basic basics of $\mathbb{Z}[\sqrt{14}]$. Numbers in it are of the form $a + b \sqrt{14}$, with $a, b \in \mathbb{Z}$. Numbers like $-3 + \sqrt{14}$ and $7 - 8 \sqrt{14}$. The norm function is $N(a + b \sqrt{14}) = a^2 - 14b^2$, which I'm told is not a Euclidean function even after the absolute value adjustment.</p> <p>Proposition $4.11$ in this paper by Franz Lemmermeyer</p> <p><a href="http://www.rzuser.uni-heidelberg.de/~hb3/publ/survey.pdf" rel="nofollow noreferrer">http://www.rzuser.uni-heidelberg.de/~hb3/publ/survey.pdf</a></p> <p>mentions $\mathbb{Z}\left[\sqrt{14}, \frac{1}{2}\right]$. I don't think I've ever read about a domain like that before, except perhaps in a way so general that the specifics eluded me.</p> <p>I'm guessing $\mathbb{Z}\left[\sqrt{14}, \frac{1}{2}\right]$ contains all the same numbers as $\mathbb{Z}[\sqrt{14}]$ as well as some other numbers. <strong>What's the form of those other numbers? What are some concrete examples of those other numbers?</strong></p> <p>P.S. Proposition $4.11$ is on page $14$ of $56$ of the PDF. Looks like a brilliant survey. I need to print it out and sit down to read it beginning to end.</p>
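<p>(For concreteness, the norm formula above gives, for the two sample elements, $$N(-3 + \sqrt{14}) = (-3)^2 - 14 \cdot 1^2 = -5, \qquad N(7 - 8\sqrt{14}) = 7^2 - 14 \cdot 8^2 = 49 - 896 = -847;$$ neither value is $\pm 1$, so neither element is a unit.)</p>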
[ { "answer_id": 2082657, "score": 7, "text": "<p>An element of this ring can be considered a polynomial in $\\frac12$, with coefficients in $\\mathbf Z[\\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as\n$$\\frac{a+b\\sqrt{14}}{2^n} \\quad (a,b\\in\\mathbf Z).$$</p>\n\n<p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\\mathbf Z[\\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\\frac12$.</p>\n" }, { "answer_id": 2082665, "score": 6, "text": "<p>If $a,b \\in \\mathbb R$, then $\\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\\mathbb{Z}$.</p>\n\n<p>When $a=\\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p>\n\n<p>Therefore, a typical element of $\\mathbb{Z}\\left[\\sqrt{14}, \\frac{1}{2}\\right]$ is of the form $\\dfrac{u+v\\sqrt{14}}{2^n}$, for $u,v \\in \\mathbb Z$ and $n \\in \\mathbb N$.</p>\n" }, { "answer_id": 2082657, "score": 7, "text": "<p>An element of this ring can be considered a polynomial in $\\frac12$, with coefficients in $\\mathbf Z[\\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as\n$$\\frac{a+b\\sqrt{14}}{2^n} \\quad (a,b\\in\\mathbf Z).$$</p>\n\n<p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\\mathbf Z[\\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\\frac12$.</p>\n" }, { "answer_id": 2082665, "score": 6, "text": "<p>If $a,b \\in \\mathbb R$, then $\\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\\mathbb{Z}$.</p>\n\n<p>When $a=\\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p>\n\n<p>Therefore, a typical element of $\\mathbb{Z}\\left[\\sqrt{14}, \\frac{1}{2}\\right]$ is of the form $\\dfrac{u+v\\sqrt{14}}{2^n}$, for $u,v \\in \\mathbb Z$ and $n \\in \\mathbb N$.</p>\n" }, { "answer_id": 2082657, "score": 7, "text": "<p>An element of this ring can be considered a polynomial in $\\frac12$, with coefficients in $\\mathbf Z[\\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as\n$$\\frac{a+b\\sqrt{14}}{2^n} \\quad (a,b\\in\\mathbf Z).$$</p>\n\n<p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\\mathbf Z[\\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\\frac12$.</p>\n" }, { "answer_id": 2082665, "score": 6, "text": "<p>If $a,b \\in \\mathbb R$, then $\\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\\mathbb{Z}$.</p>\n\n<p>When $a=\\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. 
There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p>\n\n<p>Therefore, a typical element of $\\mathbb{Z}\\left[\\sqrt{14}, \\frac{1}{2}\\right]$ is of the form $\\dfrac{u+v\\sqrt{14}}{2^n}$, for $u,v \\in \\mathbb Z$ and $n \\in \\mathbb N$.</p>\n" }, { "answer_id": 2082657, "score": 7, "text": "<p>An element of this ring can be considered a polynomial in $\\frac12$, with coefficients in $\\mathbf Z[\\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as\n$$\\frac{a+b\\sqrt{14}}{2^n} \\quad (a,b\\in\\mathbf Z).$$</p>\n\n<p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\\mathbf Z[\\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\\frac12$.</p>\n" }, { "answer_id": 2082665, "score": 6, "text": "<p>If $a,b \\in \\mathbb R$, then $\\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\\mathbb{Z}$.</p>\n\n<p>When $a=\\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p>\n\n<p>Therefore, a typical element of $\\mathbb{Z}\\left[\\sqrt{14}, \\frac{1}{2}\\right]$ is of the form $\\dfrac{u+v\\sqrt{14}}{2^n}$, for $u,v \\in \\mathbb Z$ and $n \\in \\mathbb N$.</p>\n" }, { "answer_id": 2082657, "score": 7, "text": "<p>An element of this ring can be considered a polynomial in $\\frac12$, with coefficients in $\\mathbf Z[\\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as\n$$\\frac{a+b\\sqrt{14}}{2^n} \\quad (a,b\\in\\mathbf Z).$$</p>\n\n<p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\\mathbf Z[\\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\\frac12$.</p>\n" }, { "answer_id": 2082665, "score": 6, "text": "<p>If $a,b \\in \\mathbb R$, then $\\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\\mathbb{Z}$.</p>\n\n<p>When $a=\\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p>\n\n<p>Therefore, a typical element of $\\mathbb{Z}\\left[\\sqrt{14}, \\frac{1}{2}\\right]$ is of the form $\\dfrac{u+v\\sqrt{14}}{2^n}$, for $u,v \\in \\mathbb Z$ and $n \\in \\mathbb N$.</p>\n" }, { "answer_id": 2082657, "score": 7, "text": "<p>An element of this ring can be considered a polynomial in $\\frac12$, with coefficients in $\\mathbf Z[\\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as\n$$\\frac{a+b\\sqrt{14}}{2^n} \\quad (a,b\\in\\mathbf Z).$$</p>\n\n<p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\\mathbf Z[\\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\\frac12$.</p>\n" }, { "answer_id": 2082665, "score": 6, "text": "<p>If $a,b \\in \\mathbb R$, then $\\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\\mathbb{Z}$.</p>\n\n<p>When $a=\\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. 
There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p>\n\n<p>Therefore, a typical element of $\\mathbb{Z}\\left[\\sqrt{14}, \\frac{1}{2}\\right]$ is of the form $\\dfrac{u+v\\sqrt{14}}{2^n}$, for $u,v \\in \\mathbb Z$ and $n \\in \\mathbb N$.</p>\n" }, { "answer_id": 2082657, "score": 7, "text": "<p>An element of this ring can be considered a polynomial in $\\frac12$, with coefficients in $\\mathbf Z[\\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as\n$$\\frac{a+b\\sqrt{14}}{2^n} \\quad (a,b\\in\\mathbf Z).$$</p>\n\n<p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\\mathbf Z[\\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\\frac12$.</p>\n" }, { "answer_id": 2082665, "score": 6, "text": "<p>If $a,b \\in \\mathbb R$, then $\\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\\mathbb{Z}$.</p>\n\n<p>When $a=\\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p>\n\n<p>Therefore, a typical element of $\\mathbb{Z}\\left[\\sqrt{14}, \\frac{1}{2}\\right]$ is of the form $\\dfrac{u+v\\sqrt{14}}{2^n}$, for $u,v \\in \\mathbb Z$ and $n \\in \\mathbb N$.</p>\n" }, { "answer_id": 2082657, "score": 7, "text": "<p>An element of this ring can be considered a polynomial in $\\frac12$, with coefficients in $\\mathbf Z[\\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as\n$$\\frac{a+b\\sqrt{14}}{2^n} \\quad (a,b\\in\\mathbf Z).$$</p>\n\n<p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\\mathbf Z[\\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\\frac12$.</p>\n" }, { "answer_id": 2082665, "score": 6, "text": "<p>If $a,b \\in \\mathbb R$, then $\\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\\mathbb{Z}$.</p>\n\n<p>When $a=\\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p>\n\n<p>Therefore, a typical element of $\\mathbb{Z}\\left[\\sqrt{14}, \\frac{1}{2}\\right]$ is of the form $\\dfrac{u+v\\sqrt{14}}{2^n}$, for $u,v \\in \\mathbb Z$ and $n \\in \\mathbb N$.</p>\n" }, { "answer_id": 2082657, "score": 7, "text": "<p>An element of this ring can be considered a polynomial in $\\frac12$, with coefficients in $\\mathbf Z[\\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as\n$$\\frac{a+b\\sqrt{14}}{2^n} \\quad (a,b\\in\\mathbf Z).$$</p>\n\n<p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\\mathbf Z[\\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\\frac12$.</p>\n" }, { "answer_id": 2082665, "score": 6, "text": "<p>If $a,b \\in \\mathbb R$, then $\\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\\mathbb{Z}$.</p>\n\n<p>When $a=\\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. 
There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p>\n\n<p>Therefore, a typical element of $\\mathbb{Z}\\left[\\sqrt{14}, \\frac{1}{2}\\right]$ is of the form $\\dfrac{u+v\\sqrt{14}}{2^n}$, for $u,v \\in \\mathbb Z$ and $n \\in \\mathbb N$.</p>\n" }, { "answer_id": 2082657, "score": 7, "text": "<p>An element of this ring can be considered a polynomial in $\\frac12$, with coefficients in $\\mathbf Z[\\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as\n$$\\frac{a+b\\sqrt{14}}{2^n} \\quad (a,b\\in\\mathbf Z).$$</p>\n\n<p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\\mathbf Z[\\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\\frac12$.</p>\n" }, { "answer_id": 2082665, "score": 6, "text": "<p>If $a,b \\in \\mathbb R$, then $\\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\\mathbb{Z}$.</p>\n\n<p>When $a=\\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p>\n\n<p>Therefore, a typical element of $\\mathbb{Z}\\left[\\sqrt{14}, \\frac{1}{2}\\right]$ is of the form $\\dfrac{u+v\\sqrt{14}}{2^n}$, for $u,v \\in \\mathbb Z$ and $n \\in \\mathbb N$.</p>\n" }, { "answer_id": 2082657, "score": 7, "text": "<p>An element of this ring can be considered a polynomial in $\\frac12$, with coefficients in $\\mathbf Z[\\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as\n$$\\frac{a+b\\sqrt{14}}{2^n} \\quad (a,b\\in\\mathbf Z).$$</p>\n\n<p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\\mathbf Z[\\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\\frac12$.</p>\n" }, { "answer_id": 2082665, "score": 6, "text": "<p>If $a,b \\in \\mathbb R$, then $\\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\\mathbb{Z}$.</p>\n\n<p>When $a=\\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p>\n\n<p>Therefore, a typical element of $\\mathbb{Z}\\left[\\sqrt{14}, \\frac{1}{2}\\right]$ is of the form $\\dfrac{u+v\\sqrt{14}}{2^n}$, for $u,v \\in \\mathbb Z$ and $n \\in \\mathbb N$.</p>\n" }, { "answer_id": 2082657, "score": 7, "text": "<p>An element of this ring can be considered a polynomial in $\\frac12$, with coefficients in $\\mathbf Z[\\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as\n$$\\frac{a+b\\sqrt{14}}{2^n} \\quad (a,b\\in\\mathbf Z).$$</p>\n\n<p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\\mathbf Z[\\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\\frac12$.</p>\n" }, { "answer_id": 2082665, "score": 6, "text": "<p>If $a,b \\in \\mathbb R$, then $\\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\\mathbb{Z}$.</p>\n\n<p>When $a=\\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. 
There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p>\n\n<p>Therefore, a typical element of $\\mathbb{Z}\\left[\\sqrt{14}, \\frac{1}{2}\\right]$ is of the form $\\dfrac{u+v\\sqrt{14}}{2^n}$, for $u,v \\in \\mathbb Z$ and $n \\in \\mathbb N$.</p>\n" } ]
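<p>For a concrete instance of the form described in both answers: the polynomial expression $\left(\frac{1}{2}\right)^{2}(1+\sqrt{14}) + 3$ reduces to $$\frac{1+\sqrt{14}+12}{4} = \frac{13+\sqrt{14}}{4},$$ i.e. $u = 13$, $v = 1$, $n = 2$ in the notation of the second answer.</p>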
Question: What does a number look like that's in $\mathbb{Z}\left[\sqrt{14}, \frac{1}{2}\right]$ that's not in $\mathbb{Z}[\sqrt{14}]$? Answer 1: <p>An element of this ring can be considered a polynomial in $\frac12$, with coefficients in $\mathbf Z[\sqrt{14}]$. Reducing all terms to the same denominator, an element can ultimately be written as $$\frac{a+b\sqrt{14}}{2^n} \quad (a,b\in\mathbf Z).$$</p> <p>For readers aware of <em>localisations</em>, it also may be described as the ring of fractions of $\mathbf Z[\sqrt{14}]$ w.r.t. the multiplicative subset of powers of $\frac12$.</p> Answer 2: <p>If $a,b \in \mathbb R$, then $\mathbb{Z}[a,b]$ is the smallest ring that contains $a,b$; it is the same as the set of all polynomial expressions in $a,b$ with coefficients in $\mathbb{Z}$.</p> <p>When $a=\sqrt{14}$ and $b=1/2$, all powers of $a$ reduce to an integer or to an integer times $a$. There is no reduction for powers of $b$, but all fractions can be reduced to the same denominator.</p> <p>Therefore, a typical element of $\mathbb{Z}\left[\sqrt{14}, \frac{1}{2}\right]$ is of the form $\dfrac{u+v\sqrt{14}}{2^n}$, for $u,v \in \mathbb Z$ and $n \in \mathbb N$.</p>
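<p>To check that such an element genuinely lies outside $\mathbb{Z}[\sqrt{14}]$: since $1$ and $\sqrt{14}$ are linearly independent over $\mathbb{Q}$, an equality $\frac{u+v\sqrt{14}}{2^n} = a + b\sqrt{14}$ with $a,b \in \mathbb{Z}$ forces $u = 2^n a$ and $v = 2^n b$. So $\frac{13+\sqrt{14}}{4}$, for example, is in $\mathbb{Z}\left[\sqrt{14}, \frac{1}{2}\right]$ but not in $\mathbb{Z}[\sqrt{14}]$, because $4 \nmid 1$ (the coefficient of $\sqrt{14}$).</p>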
If $|f|+|g|$ is constant on $D,$ prove that holomorphic functions $f,~g$ are constant on $D$.
2,082,670
6
https://math.stackexchange.com/questions/2082670/if-fg-is-constant-on-d-prove-that-holomorphic-functions-f-g-are-con
<blockquote> <p>Let $D\subseteq \mathbb{C}$ be open and connected and $f,~g:D \rightarrow \mathbb{C}$ holomorphic functions such that $|f|+|g|$ is constant on $D.$ Prove that $f,~g$ are constant on $D.$</p> </blockquote> <p><strong>Attempt.</strong> I noticed first that this problem has already been posted before and is considered a duplicate (<a href="https://math.stackexchange.com/questions/1076618/if-fg-is-constant-then-each-of-f-g-is-constant">If $|f|+|g|$ is constant then each of $f, g$ is constant</a>), where we are sent directly to the problem <a href="https://math.stackexchange.com/questions/193808/sum-of-holomorphic-functions">sum of holomorphic functions</a>. However, the latter reference deals with the sum $|f|^2+|g|^2$, and I do not see how that is connected to the sum $|f|+|g|$ stated in our title (for example, the equality $|f|^2+|g|^2=(|f|+|g|)^2-2|f||g|$ is not helpful here).</p> <p>Thanks in advance!</p>
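<p>A small computation that the first answer below relies on (the derivative of $|f|$ along a fixed direction $h \in \mathbb{C}$, valid wherever $f(z) \neq 0$): from $|f(z+th)|^2 = f(z+th)\,\overline{f(z+th)}$ one gets $$\frac{d}{dt}\Big|_{t=0} |f(z+th)|^2 = 2\,\operatorname{Re}\big(\overline{f(z)}\,f'(z)\,h\big),$$ and combining this with $\frac{d}{dt}|f|^2 = 2|f|\,\frac{d}{dt}|f|$ yields $$\frac{d}{dt}\Big|_{t=0} |f(z+th)| = \frac{\operatorname{Re}\big(\overline{f(z)}\,f'(z)\,h\big)}{|f(z)|}.$$</p>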
[ { "answer_id": 2084260, "score": 2, "text": "<p>Here is a proof that relies heavily on the open mapping theorem.</p>\n<p>Since the zeros of <span class=\"math-container\">$f,g,f',g'$</span> are isolated, we can find some <span class=\"math-container\">$z_0$</span> such that <span class=\"math-container\">$f,g,f',g'$</span>\nare non zero in a neighbourhood of <span class=\"math-container\">$z_0$</span>.</p>\n<p>Suppose <span class=\"math-container\">$z$</span> lies in this neighbourhood of <span class=\"math-container\">$z_0$</span>.</p>\n<p>If we let <span class=\"math-container\">$\\phi(t) = {1 \\over 2}|f(z+th)|$</span>, it is straightforward to show that\n<span class=\"math-container\">$\\phi'(0) = { \\operatorname{re}(\\overline{f(z)} f'(z) h ) \\over |f(z)|}$</span>, hence since <span class=\"math-container\">$|f(z)| + |g(z)|$</span> is constant\nwe obtain\n<span class=\"math-container\">\\begin{eqnarray}\n{ \\operatorname{re}(\\overline{f(z)} f'(z) h ) \\over |f(z)|} + { \\operatorname{re}(\\overline{g(z)} g'(z) h ) \\over |g(z)|} &amp;=&amp; \\operatorname{re} \\left[ \\biggl( { \\overline{f(z)} f'(z) \\over |f(z)|} + { \\overline{g(z)} g'(z) \\over |g(z)|} \\biggr) h\\right] \\\\\n&amp;=&amp; 0\n\\end{eqnarray}</span>\nfor all <span class=\"math-container\">$h$</span>.</p>\n<p>Hence, we have <span class=\"math-container\">${ \\overline{f(z)} f'(z) \\over |f(z)|} + { \\overline{g(z)} g'(z) \\over |g(z)|} = 0$</span> and so\n<span class=\"math-container\">${f'(z) \\over g'(z)} = -{|f(z)| \\over \\overline{f(z)} } {\\overline{g(z)} \\over |g(z)|}$</span>, in particular, <span class=\"math-container\">$| {f'(z) \\over g'(z)}| = 1$</span>. Hence\n<span class=\"math-container\">${f'(z) \\over g'(z)} = c$</span> for some constant and so\n<span class=\"math-container\">${g(z) \\over f(z)} = - \\overline{c} | {g(z) \\over f(z)} | $</span>. Hence the values <span class=\"math-container\">$z \\mapsto {g(z) \\over f(z)}$</span> takes lie on the line <span class=\"math-container\">$\\{t \\overline{c} \\}_{t \\in \\mathbb{R}}$</span> and hence we must have <span class=\"math-container\">${g(z) \\over f(z)} = d$</span>, another constant. Hence <span class=\"math-container\">$(1+|d|)|f(z)|$</span> is a constant and so <span class=\"math-container\">$f$</span> is a constant.</p>\n" }, { "answer_id": 4173280, "score": 2, "text": "<p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p>\n<p>First let's enumerate our equation\n<span class=\"math-container\">\\begin{eqnarray}\n\\big|f\\big| + \\big|g\\big| = c. 
\\quad(1)\n\\end{eqnarray}</span></p>\n<p>It's easy to see that f,g can't be zero on <span class=\"math-container\">$D$</span>, otherwise if for example there was <span class=\"math-container\">$z_0\\in D$</span> such as <span class=\"math-container\">$f(z_0)= 0$</span> then you have that\n<span class=\"math-container\">$$\\big|g(z_0)\\big| = c$$</span>\nbut you also have that\n<span class=\"math-container\">$$\\big|g(z)\\big|\\leq c \\quad \\forall z\\in D$$</span>\nso from maximum modulus principal you have that <span class=\"math-container\">$g=c$</span> on <span class=\"math-container\">$D$</span> so (1) gives us that <span class=\"math-container\">$f = 0$</span> on <span class=\"math-container\">$D$</span>.\nSo we have that there exists some holomorphic functions on <span class=\"math-container\">$D$</span>, <span class=\"math-container\">$f_1,g_1$</span> such us\n<span class=\"math-container\">\\begin{eqnarray}\n&amp;f_1^2=f \\\\\n&amp;g_1^2=g\n\\end{eqnarray}</span>\n(Theorem 13.11 page 274 from Real and Complex Analysis by Rudin).\nNow you have what you were looking for, you have that\n<span class=\"math-container\">$$\\big|f_1\\big|^2+\\big|g_1|^2 = c \\quad (2)$$</span>\nand now all you need is to prove is that <span class=\"math-container\">$f_1,g_1$</span> are constant.\nNow you can continue as that link you mention, but there is one other way to do it.\nWe have that <span class=\"math-container\">$f_1,g_1$</span> are holomorphic so they have a powerseries expansion\n<span class=\"math-container\">\\begin{eqnarray}\n&amp;f_1(z) = \\sum_{n=0}^{\\infty}a_nz^n \\\\\n&amp;g_1(z) = \\sum_{n=0}^{\\infty}b_nz^n\n\\end{eqnarray}</span>\nso if we integrate on some circle <span class=\"math-container\">$\\gamma$</span> with centre <span class=\"math-container\">$0$</span> and radious <span class=\"math-container\">$r$</span>\n(I suppose that <span class=\"math-container\">$\\;D = D(0,R)\\;$</span> and <span class=\"math-container\">$\\;0&lt;r&lt;R\\;$</span>)\nequation (2), Parseval's formula gives us that\n<span class=\"math-container\">\\begin{eqnarray}\n&amp;\\frac{1}{2\\pi }\\int_{\\gamma}|f_1|^2dz + \\frac{1}{2\\pi }\\int_{\\gamma}|g_1|^2 dz =\\frac{1}{2\\pi}\\int_{\\gamma}cdz \\Longrightarrow \\\\\n&amp;c_1 = \\sum_{n=0}^{\\infty}|a_n|^2r^{2n}+\\sum_{n=0}^{\\infty}|b_n|^2r^{2n} = \\sum_{n=0}^{\\infty}\\Bigl(|a_n|^2+|b_n|^2\\Bigr)r^{2n} \\quad \\forall r\\in(0,R)\n\\end{eqnarray}</span>\nwhere <span class=\"math-container\">$\\; c_1\\in \\mathbb C\\;$</span> is a constant, so all <span class=\"math-container\">$a_n,b_n = 0$</span> for <span class=\"math-container\">$n\\geq 1$</span>, and you have the result.</p>\n" }, { "answer_id": 2082714, "score": 1, "text": "<p>By the maximum modulus principle $|f(z)|$ has no local minimum and maximum except at its zeros. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$ so it is a local maximum of $|g(z)|$ and $g(a) = 0$, i.e. $f(z) = g(z)= 0$.</p>\n\n<p>$\\implies$ If $f(z)$ is non-constant then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \\log f(z), G(z) = \\log g(z)$ and $H (z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. 
\nDifferentiating (using $\partial_x Re(F) = Re(F')$ and $\partial_y Re(F) = -Im(F')$): $$\\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0 $$ $$ \\partial_y H(z) = -Im(F'(z)) e^{Re(F(z))}-Im(G'(z))e^{Re(G(z))}=0$$</p>\n\n<p>so that $$0= F'(z)e^{Re(F(z))}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z)$$\nFinally, $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ shows that $(G'(z)-F'(z))e^{Re(F(z))}$ is holomorphic. Wherever $G' \neq F'$, this forces the real-valued function $e^{Re(F(z))}$ to be holomorphic, hence constant, so $Re(F(z)) = \log|f(z)|$ is constant and $F(z), f(z)$ are constant; and if $G' \equiv F'$ then $C G' \equiv 0$, so $g$ is constant and again $f$ is constant.</p>\n" },
\\quad(1)\n\\end{eqnarray}</span></p>\n<p>It's easy to see that f,g can't be zero on <span class=\"math-container\">$D$</span>, otherwise if for example there was <span class=\"math-container\">$z_0\\in D$</span> such as <span class=\"math-container\">$f(z_0)= 0$</span> then you have that\n<span class=\"math-container\">$$\\big|g(z_0)\\big| = c$$</span>\nbut you also have that\n<span class=\"math-container\">$$\\big|g(z)\\big|\\leq c \\quad \\forall z\\in D$$</span>\nso from maximum modulus principal you have that <span class=\"math-container\">$g=c$</span> on <span class=\"math-container\">$D$</span> so (1) gives us that <span class=\"math-container\">$f = 0$</span> on <span class=\"math-container\">$D$</span>.\nSo we have that there exists some holomorphic functions on <span class=\"math-container\">$D$</span>, <span class=\"math-container\">$f_1,g_1$</span> such us\n<span class=\"math-container\">\\begin{eqnarray}\n&amp;f_1^2=f \\\\\n&amp;g_1^2=g\n\\end{eqnarray}</span>\n(Theorem 13.11 page 274 from Real and Complex Analysis by Rudin).\nNow you have what you were looking for, you have that\n<span class=\"math-container\">$$\\big|f_1\\big|^2+\\big|g_1|^2 = c \\quad (2)$$</span>\nand now all you need is to prove is that <span class=\"math-container\">$f_1,g_1$</span> are constant.\nNow you can continue as that link you mention, but there is one other way to do it.\nWe have that <span class=\"math-container\">$f_1,g_1$</span> are holomorphic so they have a powerseries expansion\n<span class=\"math-container\">\\begin{eqnarray}\n&amp;f_1(z) = \\sum_{n=0}^{\\infty}a_nz^n \\\\\n&amp;g_1(z) = \\sum_{n=0}^{\\infty}b_nz^n\n\\end{eqnarray}</span>\nso if we integrate on some circle <span class=\"math-container\">$\\gamma$</span> with centre <span class=\"math-container\">$0$</span> and radious <span class=\"math-container\">$r$</span>\n(I suppose that <span class=\"math-container\">$\\;D = D(0,R)\\;$</span> and <span class=\"math-container\">$\\;0&lt;r&lt;R\\;$</span>)\nequation (2), Parseval's formula gives us that\n<span class=\"math-container\">\\begin{eqnarray}\n&amp;\\frac{1}{2\\pi }\\int_{\\gamma}|f_1|^2dz + \\frac{1}{2\\pi }\\int_{\\gamma}|g_1|^2 dz =\\frac{1}{2\\pi}\\int_{\\gamma}cdz \\Longrightarrow \\\\\n&amp;c_1 = \\sum_{n=0}^{\\infty}|a_n|^2r^{2n}+\\sum_{n=0}^{\\infty}|b_n|^2r^{2n} = \\sum_{n=0}^{\\infty}\\Bigl(|a_n|^2+|b_n|^2\\Bigr)r^{2n} \\quad \\forall r\\in(0,R)\n\\end{eqnarray}</span>\nwhere <span class=\"math-container\">$\\; c_1\\in \\mathbb C\\;$</span> is a constant, so all <span class=\"math-container\">$a_n,b_n = 0$</span> for <span class=\"math-container\">$n\\geq 1$</span>, and you have the result.</p>\n" }, { "answer_id": 2082714, "score": 1, "text": "<p>By the maximum modulus principle $|f(z)|$ has no local minimum and maximum except at its zeros. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$ so it is a local maximum of $|g(z)|$ and $g(a) = 0$, i.e. $f(z) = g(z)= 0$.</p>\n\n<p>$\\implies$ If $f(z)$ is non-constant then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \\log f(z), G(z) = \\log g(z)$ and $H (z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. 
\nDifferentiating : $$\\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0 $$ $$ \\partial_y H(z) = Im(F'(z)) e^{Re(F(z))}+Im(G'(z))e^{Re(G(z))}=0$$</p>\n\n<p>so that $$0= F'(z)e^{Re(F)}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z)$$\nFinally $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ means that $(G'(z)-F'(z))e^{Re(F(z))}$ and $e^{Re(F(z))}$ is holomorphic, i.e. $Re(F(z))$ and $F(z),f(z)$ are constant.</p>\n" }, { "answer_id": 2084260, "score": 2, "text": "<p>Here is a proof that relies heavily on the open mapping theorem.</p>\n<p>Since the zeros of <span class=\"math-container\">$f,g,f',g'$</span> are isolated, we can find some <span class=\"math-container\">$z_0$</span> such that <span class=\"math-container\">$f,g,f',g'$</span>\nare non zero in a neighbourhood of <span class=\"math-container\">$z_0$</span>.</p>\n<p>Suppose <span class=\"math-container\">$z$</span> lies in this neighbourhood of <span class=\"math-container\">$z_0$</span>.</p>\n<p>If we let <span class=\"math-container\">$\\phi(t) = {1 \\over 2}|f(z+th)|$</span>, it is straightforward to show that\n<span class=\"math-container\">$\\phi'(0) = { \\operatorname{re}(\\overline{f(z)} f'(z) h ) \\over |f(z)|}$</span>, hence since <span class=\"math-container\">$|f(z)| + |g(z)|$</span> is constant\nwe obtain\n<span class=\"math-container\">\\begin{eqnarray}\n{ \\operatorname{re}(\\overline{f(z)} f'(z) h ) \\over |f(z)|} + { \\operatorname{re}(\\overline{g(z)} g'(z) h ) \\over |g(z)|} &amp;=&amp; \\operatorname{re} \\left[ \\biggl( { \\overline{f(z)} f'(z) \\over |f(z)|} + { \\overline{g(z)} g'(z) \\over |g(z)|} \\biggr) h\\right] \\\\\n&amp;=&amp; 0\n\\end{eqnarray}</span>\nfor all <span class=\"math-container\">$h$</span>.</p>\n<p>Hence, we have <span class=\"math-container\">${ \\overline{f(z)} f'(z) \\over |f(z)|} + { \\overline{g(z)} g'(z) \\over |g(z)|} = 0$</span> and so\n<span class=\"math-container\">${f'(z) \\over g'(z)} = -{|f(z)| \\over \\overline{f(z)} } {\\overline{g(z)} \\over |g(z)|}$</span>, in particular, <span class=\"math-container\">$| {f'(z) \\over g'(z)}| = 1$</span>. Hence\n<span class=\"math-container\">${f'(z) \\over g'(z)} = c$</span> for some constant and so\n<span class=\"math-container\">${g(z) \\over f(z)} = - \\overline{c} | {g(z) \\over f(z)} | $</span>. Hence the values <span class=\"math-container\">$z \\mapsto {g(z) \\over f(z)}$</span> takes lie on the line <span class=\"math-container\">$\\{t \\overline{c} \\}_{t \\in \\mathbb{R}}$</span> and hence we must have <span class=\"math-container\">${g(z) \\over f(z)} = d$</span>, another constant. Hence <span class=\"math-container\">$(1+|d|)|f(z)|$</span> is a constant and so <span class=\"math-container\">$f$</span> is a constant.</p>\n" }, { "answer_id": 4173280, "score": 2, "text": "<p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p>\n<p>First let's enumerate our equation\n<span class=\"math-container\">\\begin{eqnarray}\n\\big|f\\big| + \\big|g\\big| = c. 
\\quad(1)\n\\end{eqnarray}</span></p>\n<p>It's easy to see that f,g can't be zero on <span class=\"math-container\">$D$</span>, otherwise if for example there was <span class=\"math-container\">$z_0\\in D$</span> such as <span class=\"math-container\">$f(z_0)= 0$</span> then you have that\n<span class=\"math-container\">$$\\big|g(z_0)\\big| = c$$</span>\nbut you also have that\n<span class=\"math-container\">$$\\big|g(z)\\big|\\leq c \\quad \\forall z\\in D$$</span>\nso from maximum modulus principal you have that <span class=\"math-container\">$g=c$</span> on <span class=\"math-container\">$D$</span> so (1) gives us that <span class=\"math-container\">$f = 0$</span> on <span class=\"math-container\">$D$</span>.\nSo we have that there exists some holomorphic functions on <span class=\"math-container\">$D$</span>, <span class=\"math-container\">$f_1,g_1$</span> such us\n<span class=\"math-container\">\\begin{eqnarray}\n&amp;f_1^2=f \\\\\n&amp;g_1^2=g\n\\end{eqnarray}</span>\n(Theorem 13.11 page 274 from Real and Complex Analysis by Rudin).\nNow you have what you were looking for, you have that\n<span class=\"math-container\">$$\\big|f_1\\big|^2+\\big|g_1|^2 = c \\quad (2)$$</span>\nand now all you need is to prove is that <span class=\"math-container\">$f_1,g_1$</span> are constant.\nNow you can continue as that link you mention, but there is one other way to do it.\nWe have that <span class=\"math-container\">$f_1,g_1$</span> are holomorphic so they have a powerseries expansion\n<span class=\"math-container\">\\begin{eqnarray}\n&amp;f_1(z) = \\sum_{n=0}^{\\infty}a_nz^n \\\\\n&amp;g_1(z) = \\sum_{n=0}^{\\infty}b_nz^n\n\\end{eqnarray}</span>\nso if we integrate on some circle <span class=\"math-container\">$\\gamma$</span> with centre <span class=\"math-container\">$0$</span> and radious <span class=\"math-container\">$r$</span>\n(I suppose that <span class=\"math-container\">$\\;D = D(0,R)\\;$</span> and <span class=\"math-container\">$\\;0&lt;r&lt;R\\;$</span>)\nequation (2), Parseval's formula gives us that\n<span class=\"math-container\">\\begin{eqnarray}\n&amp;\\frac{1}{2\\pi }\\int_{\\gamma}|f_1|^2dz + \\frac{1}{2\\pi }\\int_{\\gamma}|g_1|^2 dz =\\frac{1}{2\\pi}\\int_{\\gamma}cdz \\Longrightarrow \\\\\n&amp;c_1 = \\sum_{n=0}^{\\infty}|a_n|^2r^{2n}+\\sum_{n=0}^{\\infty}|b_n|^2r^{2n} = \\sum_{n=0}^{\\infty}\\Bigl(|a_n|^2+|b_n|^2\\Bigr)r^{2n} \\quad \\forall r\\in(0,R)\n\\end{eqnarray}</span>\nwhere <span class=\"math-container\">$\\; c_1\\in \\mathbb C\\;$</span> is a constant, so all <span class=\"math-container\">$a_n,b_n = 0$</span> for <span class=\"math-container\">$n\\geq 1$</span>, and you have the result.</p>\n" }, { "answer_id": 2082714, "score": 1, "text": "<p>By the maximum modulus principle $|f(z)|$ has no local minimum and maximum except at its zeros. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$ so it is a local maximum of $|g(z)|$ and $g(a) = 0$, i.e. $f(z) = g(z)= 0$.</p>\n\n<p>$\\implies$ If $f(z)$ is non-constant then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \\log f(z), G(z) = \\log g(z)$ and $H (z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. 
\nDifferentiating : $$\\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0 $$ $$ \\partial_y H(z) = Im(F'(z)) e^{Re(F(z))}+Im(G'(z))e^{Re(G(z))}=0$$</p>\n\n<p>so that $$0= F'(z)e^{Re(F)}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z)$$\nFinally $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ means that $(G'(z)-F'(z))e^{Re(F(z))}$ and $e^{Re(F(z))}$ is holomorphic, i.e. $Re(F(z))$ and $F(z),f(z)$ are constant.</p>\n" }, { "answer_id": 2084260, "score": 2, "text": "<p>Here is a proof that relies heavily on the open mapping theorem.</p>\n<p>Since the zeros of <span class=\"math-container\">$f,g,f',g'$</span> are isolated, we can find some <span class=\"math-container\">$z_0$</span> such that <span class=\"math-container\">$f,g,f',g'$</span>\nare non zero in a neighbourhood of <span class=\"math-container\">$z_0$</span>.</p>\n<p>Suppose <span class=\"math-container\">$z$</span> lies in this neighbourhood of <span class=\"math-container\">$z_0$</span>.</p>\n<p>If we let <span class=\"math-container\">$\\phi(t) = {1 \\over 2}|f(z+th)|$</span>, it is straightforward to show that\n<span class=\"math-container\">$\\phi'(0) = { \\operatorname{re}(\\overline{f(z)} f'(z) h ) \\over |f(z)|}$</span>, hence since <span class=\"math-container\">$|f(z)| + |g(z)|$</span> is constant\nwe obtain\n<span class=\"math-container\">\\begin{eqnarray}\n{ \\operatorname{re}(\\overline{f(z)} f'(z) h ) \\over |f(z)|} + { \\operatorname{re}(\\overline{g(z)} g'(z) h ) \\over |g(z)|} &amp;=&amp; \\operatorname{re} \\left[ \\biggl( { \\overline{f(z)} f'(z) \\over |f(z)|} + { \\overline{g(z)} g'(z) \\over |g(z)|} \\biggr) h\\right] \\\\\n&amp;=&amp; 0\n\\end{eqnarray}</span>\nfor all <span class=\"math-container\">$h$</span>.</p>\n<p>Hence, we have <span class=\"math-container\">${ \\overline{f(z)} f'(z) \\over |f(z)|} + { \\overline{g(z)} g'(z) \\over |g(z)|} = 0$</span> and so\n<span class=\"math-container\">${f'(z) \\over g'(z)} = -{|f(z)| \\over \\overline{f(z)} } {\\overline{g(z)} \\over |g(z)|}$</span>, in particular, <span class=\"math-container\">$| {f'(z) \\over g'(z)}| = 1$</span>. Hence\n<span class=\"math-container\">${f'(z) \\over g'(z)} = c$</span> for some constant and so\n<span class=\"math-container\">${g(z) \\over f(z)} = - \\overline{c} | {g(z) \\over f(z)} | $</span>. Hence the values <span class=\"math-container\">$z \\mapsto {g(z) \\over f(z)}$</span> takes lie on the line <span class=\"math-container\">$\\{t \\overline{c} \\}_{t \\in \\mathbb{R}}$</span> and hence we must have <span class=\"math-container\">${g(z) \\over f(z)} = d$</span>, another constant. Hence <span class=\"math-container\">$(1+|d|)|f(z)|$</span> is a constant and so <span class=\"math-container\">$f$</span> is a constant.</p>\n" }, { "answer_id": 4173280, "score": 2, "text": "<p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p>\n<p>First let's enumerate our equation\n<span class=\"math-container\">\\begin{eqnarray}\n\\big|f\\big| + \\big|g\\big| = c. 
]
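A remark on the differentiation step in the maximum-modulus answer above (answer_id 2082714), sketched here with the convention $F' = \partial_x F$: for a holomorphic $F$ the Cauchy–Riemann equations give $\partial_y F = iF'$, hence
$$\partial_x Re(F) = Re(F'), \qquad \partial_y Re(F) = Re(iF') = -Im(F'),$$
and combining the two vanishing partial derivatives of $H$ as $\partial_x H - i\,\partial_y H = 0$ recovers the holomorphic relation $F'(z)e^{Re(F(z))} + G'(z)e^{Re(G(z))} = 0$ used there.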
Question: If $|f|+|g|$ is constant on $D$, prove that the holomorphic functions $f,~g$ are constant on $D$. Answer 1: <p>Here is a proof that relies heavily on the open mapping theorem.</p>
<p>Since the zeros of <span class="math-container">$f,g,f',g'$</span> are isolated (if any of these vanishes identically, then <span class="math-container">$f$</span> or <span class="math-container">$g$</span> is constant, and then so is the other, since its modulus is constant), we can find some <span class="math-container">$z_0$</span> such that <span class="math-container">$f,g,f',g'$</span>
are nonzero in a neighbourhood of <span class="math-container">$z_0$</span>.</p>
<p>Suppose <span class="math-container">$z$</span> lies in this neighbourhood of <span class="math-container">$z_0$</span>.</p>
<p>If we let <span class="math-container">$\phi(t) = |f(z+th)|$</span>, it is straightforward to show that
<span class="math-container">$\phi'(0) = { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|}$</span>, hence since <span class="math-container">$|f(z)| + |g(z)|$</span> is constant
we obtain
<span class="math-container">\begin{eqnarray}
{ \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|} + { \operatorname{re}(\overline{g(z)} g'(z) h ) \over |g(z)|} &amp;=&amp; \operatorname{re} \left[ \biggl( { \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} \biggr) h\right] \\
&amp;=&amp; 0
\end{eqnarray}</span>
for all <span class="math-container">$h$</span>.</p>
<p>Hence, taking <span class="math-container">$h=1$</span> and <span class="math-container">$h=i$</span>, we have <span class="math-container">${ \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} = 0$</span> and so
<span class="math-container">${f'(z) \over g'(z)} = -{|f(z)| \over \overline{f(z)} } {\overline{g(z)} \over |g(z)|}$</span>; in particular, <span class="math-container">$| {f'(z) \over g'(z)}| = 1$</span>. Since <span class="math-container">$f'/g'$</span> is holomorphic with constant modulus, the open mapping theorem forces
<span class="math-container">${f'(z) \over g'(z)} = c$</span> for some constant, and so
<span class="math-container">${g(z) \over f(z)} = - \overline{c} | {g(z) \over f(z)} | $</span>. Hence the values <span class="math-container">$z \mapsto {g(z) \over f(z)}$</span> takes lie on the line <span class="math-container">$\{t \overline{c} \}_{t \in \mathbb{R}}$</span>; a non-constant holomorphic function is open and its image cannot lie in a line, so we must have <span class="math-container">${g(z) \over f(z)} = d$</span>, another constant. Hence <span class="math-container">$(1+|d|)|f(z)|$</span> is constant, so <span class="math-container">$f$</span> is constant near <span class="math-container">$z_0$</span>, hence on all of <span class="math-container">$D$</span> by the identity theorem, and then <span class="math-container">$g = df$</span> is constant as well.</p>
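For completeness, here is one way to check the derivative computation that Answer 1 calls straightforward (a sketch, assuming only that <span class="math-container">$f(z)\neq 0$</span>): writing <span class="math-container">$|f|^2 = f\,\overline{f}$</span>,
<span class="math-container">$$\frac{d}{dt}\Big|_{t=0}\big|f(z+th)\big|^2 = f'(z)h\,\overline{f(z)} + f(z)\,\overline{f'(z)h} = 2\operatorname{re}\big(\overline{f(z)}\,f'(z)\,h\big),$$</span>
and since <span class="math-container">$|f| = \sqrt{|f|^2}$</span>,
<span class="math-container">$$\phi'(0) = \frac{d}{dt}\Big|_{t=0}\big|f(z+th)\big| = \frac{2\operatorname{re}\big(\overline{f(z)}\,f'(z)\,h\big)}{2|f(z)|} = \frac{\operatorname{re}\big(\overline{f(z)}\,f'(z)\,h\big)}{|f(z)|}.$$</span>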
Answer 3: <p>By the maximum modulus principle, $|f(z)|$ has no local maximum unless $f$ is constant, and (applying the principle to $1/f$) no local minimum except at the zeros of $f$. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$, hence a local maximum of $|g(z)|$, so $g$ is constant; then $|f|$ is constant too, and both functions are constant.</p> <p>So suppose $f(z)$ is non-constant. Then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \log f(z), G(z) = \log g(z)$ (defined on $D$, which is simply connected) and $H(z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. Differentiating: $$\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0, $$ $$ \partial_y H(z) = -Im(F'(z)) e^{Re(F(z))}-Im(G'(z))e^{Re(G(z))}=0,$$</p> <p>so that $$0= F'(z)e^{Re(F(z))}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z).$$ Finally, $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ means that $e^{Re(F(z))}=C G'(z)/(G'(z)-F'(z))$ is holomorphic wherever $G'\neq F'$; a real-valued holomorphic function is constant (see the sketch below), so $Re(F(z))$ is constant, and with it $F(z)$ and $f(z)$, contradicting the assumption. (If instead $G'\equiv F'$, then $C G'\equiv 0$, so $g$ is constant and, by constancy of $|f|+|g|$, so is $f$.)</p>
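A quick supplement (not part of the original answers): Answers 1 and 3 both use the fact that a holomorphic function whose values lie on a line, in particular a real-valued one, is constant. A minimal sketch: after rotating the line onto the real axis, write $h = u+iv$ with $v\equiv 0$; the Cauchy–Riemann equations give $$u_x = v_y = 0, \qquad u_y = -v_x = 0,$$ so $u$, and hence $h$, is constant on the connected domain. The same reasoning covers a holomorphic function of constant modulus, as in Answer 1: its values lie on a circle, a set with empty interior, so by the open mapping theorem the function is constant.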
$Re(F(z))$ and $F(z),f(z)$ are constant.</p> Answer 4: <p>Here is a proof that relies heavily on the open mapping theorem.</p> <p>Since the zeros of <span class="math-container">$f,g,f',g'$</span> are isolated, we can find some <span class="math-container">$z_0$</span> such that <span class="math-container">$f,g,f',g'$</span> are non zero in a neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>Suppose <span class="math-container">$z$</span> lies in this neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>If we let <span class="math-container">$\phi(t) = {1 \over 2}|f(z+th)|$</span>, it is straightforward to show that <span class="math-container">$\phi'(0) = { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|}$</span>, hence since <span class="math-container">$|f(z)| + |g(z)|$</span> is constant we obtain <span class="math-container">\begin{eqnarray} { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|} + { \operatorname{re}(\overline{g(z)} g'(z) h ) \over |g(z)|} &amp;=&amp; \operatorname{re} \left[ \biggl( { \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} \biggr) h\right] \\ &amp;=&amp; 0 \end{eqnarray}</span> for all <span class="math-container">$h$</span>.</p> <p>Hence, we have <span class="math-container">${ \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} = 0$</span> and so <span class="math-container">${f'(z) \over g'(z)} = -{|f(z)| \over \overline{f(z)} } {\overline{g(z)} \over |g(z)|}$</span>, in particular, <span class="math-container">$| {f'(z) \over g'(z)}| = 1$</span>. Hence <span class="math-container">${f'(z) \over g'(z)} = c$</span> for some constant and so <span class="math-container">${g(z) \over f(z)} = - \overline{c} | {g(z) \over f(z)} | $</span>. Hence the values <span class="math-container">$z \mapsto {g(z) \over f(z)}$</span> takes lie on the line <span class="math-container">$\{t \overline{c} \}_{t \in \mathbb{R}}$</span> and hence we must have <span class="math-container">${g(z) \over f(z)} = d$</span>, another constant. Hence <span class="math-container">$(1+|d|)|f(z)|$</span> is a constant and so <span class="math-container">$f$</span> is a constant.</p> Answer 5: <p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p> <p>First let's enumerate our equation <span class="math-container">\begin{eqnarray} \big|f\big| + \big|g\big| = c. \quad(1) \end{eqnarray}</span></p> <p>It's easy to see that f,g can't be zero on <span class="math-container">$D$</span>, otherwise if for example there was <span class="math-container">$z_0\in D$</span> such as <span class="math-container">$f(z_0)= 0$</span> then you have that <span class="math-container">$$\big|g(z_0)\big| = c$$</span> but you also have that <span class="math-container">$$\big|g(z)\big|\leq c \quad \forall z\in D$$</span> so from maximum modulus principal you have that <span class="math-container">$g=c$</span> on <span class="math-container">$D$</span> so (1) gives us that <span class="math-container">$f = 0$</span> on <span class="math-container">$D$</span>. So we have that there exists some holomorphic functions on <span class="math-container">$D$</span>, <span class="math-container">$f_1,g_1$</span> such us <span class="math-container">\begin{eqnarray} &amp;f_1^2=f \\ &amp;g_1^2=g \end{eqnarray}</span> (Theorem 13.11 page 274 from Real and Complex Analysis by Rudin). 
Now you have what you were looking for, you have that <span class="math-container">$$\big|f_1\big|^2+\big|g_1|^2 = c \quad (2)$$</span> and now all you need is to prove is that <span class="math-container">$f_1,g_1$</span> are constant. Now you can continue as that link you mention, but there is one other way to do it. We have that <span class="math-container">$f_1,g_1$</span> are holomorphic so they have a powerseries expansion <span class="math-container">\begin{eqnarray} &amp;f_1(z) = \sum_{n=0}^{\infty}a_nz^n \\ &amp;g_1(z) = \sum_{n=0}^{\infty}b_nz^n \end{eqnarray}</span> so if we integrate on some circle <span class="math-container">$\gamma$</span> with centre <span class="math-container">$0$</span> and radious <span class="math-container">$r$</span> (I suppose that <span class="math-container">$\;D = D(0,R)\;$</span> and <span class="math-container">$\;0&lt;r&lt;R\;$</span>) equation (2), Parseval's formula gives us that <span class="math-container">\begin{eqnarray} &amp;\frac{1}{2\pi }\int_{\gamma}|f_1|^2dz + \frac{1}{2\pi }\int_{\gamma}|g_1|^2 dz =\frac{1}{2\pi}\int_{\gamma}cdz \Longrightarrow \\ &amp;c_1 = \sum_{n=0}^{\infty}|a_n|^2r^{2n}+\sum_{n=0}^{\infty}|b_n|^2r^{2n} = \sum_{n=0}^{\infty}\Bigl(|a_n|^2+|b_n|^2\Bigr)r^{2n} \quad \forall r\in(0,R) \end{eqnarray}</span> where <span class="math-container">$\; c_1\in \mathbb C\;$</span> is a constant, so all <span class="math-container">$a_n,b_n = 0$</span> for <span class="math-container">$n\geq 1$</span>, and you have the result.</p> Answer 6: <p>By the maximum modulus principle $|f(z)|$ has no local minimum and maximum except at its zeros. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$ so it is a local maximum of $|g(z)|$ and $g(a) = 0$, i.e. $f(z) = g(z)= 0$.</p> <p>$\implies$ If $f(z)$ is non-constant then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \log f(z), G(z) = \log g(z)$ and $H (z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. Differentiating : $$\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0 $$ $$ \partial_y H(z) = Im(F'(z)) e^{Re(F(z))}+Im(G'(z))e^{Re(G(z))}=0$$</p> <p>so that $$0= F'(z)e^{Re(F)}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z)$$ Finally $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ means that $(G'(z)-F'(z))e^{Re(F(z))}$ and $e^{Re(F(z))}$ is holomorphic, i.e. 
$Re(F(z))$ and $F(z),f(z)$ are constant.</p> Answer 7: <p>Here is a proof that relies heavily on the open mapping theorem.</p> <p>Since the zeros of <span class="math-container">$f,g,f',g'$</span> are isolated, we can find some <span class="math-container">$z_0$</span> such that <span class="math-container">$f,g,f',g'$</span> are non zero in a neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>Suppose <span class="math-container">$z$</span> lies in this neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>If we let <span class="math-container">$\phi(t) = {1 \over 2}|f(z+th)|$</span>, it is straightforward to show that <span class="math-container">$\phi'(0) = { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|}$</span>, hence since <span class="math-container">$|f(z)| + |g(z)|$</span> is constant we obtain <span class="math-container">\begin{eqnarray} { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|} + { \operatorname{re}(\overline{g(z)} g'(z) h ) \over |g(z)|} &amp;=&amp; \operatorname{re} \left[ \biggl( { \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} \biggr) h\right] \\ &amp;=&amp; 0 \end{eqnarray}</span> for all <span class="math-container">$h$</span>.</p> <p>Hence, we have <span class="math-container">${ \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} = 0$</span> and so <span class="math-container">${f'(z) \over g'(z)} = -{|f(z)| \over \overline{f(z)} } {\overline{g(z)} \over |g(z)|}$</span>, in particular, <span class="math-container">$| {f'(z) \over g'(z)}| = 1$</span>. Hence <span class="math-container">${f'(z) \over g'(z)} = c$</span> for some constant and so <span class="math-container">${g(z) \over f(z)} = - \overline{c} | {g(z) \over f(z)} | $</span>. Hence the values <span class="math-container">$z \mapsto {g(z) \over f(z)}$</span> takes lie on the line <span class="math-container">$\{t \overline{c} \}_{t \in \mathbb{R}}$</span> and hence we must have <span class="math-container">${g(z) \over f(z)} = d$</span>, another constant. Hence <span class="math-container">$(1+|d|)|f(z)|$</span> is a constant and so <span class="math-container">$f$</span> is a constant.</p> Answer 8: <p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p> <p>First let's enumerate our equation <span class="math-container">\begin{eqnarray} \big|f\big| + \big|g\big| = c. \quad(1) \end{eqnarray}</span></p> <p>It's easy to see that f,g can't be zero on <span class="math-container">$D$</span>, otherwise if for example there was <span class="math-container">$z_0\in D$</span> such as <span class="math-container">$f(z_0)= 0$</span> then you have that <span class="math-container">$$\big|g(z_0)\big| = c$$</span> but you also have that <span class="math-container">$$\big|g(z)\big|\leq c \quad \forall z\in D$$</span> so from maximum modulus principal you have that <span class="math-container">$g=c$</span> on <span class="math-container">$D$</span> so (1) gives us that <span class="math-container">$f = 0$</span> on <span class="math-container">$D$</span>. So we have that there exists some holomorphic functions on <span class="math-container">$D$</span>, <span class="math-container">$f_1,g_1$</span> such us <span class="math-container">\begin{eqnarray} &amp;f_1^2=f \\ &amp;g_1^2=g \end{eqnarray}</span> (Theorem 13.11 page 274 from Real and Complex Analysis by Rudin). 
Now you have what you were looking for, you have that <span class="math-container">$$\big|f_1\big|^2+\big|g_1|^2 = c \quad (2)$$</span> and now all you need is to prove is that <span class="math-container">$f_1,g_1$</span> are constant. Now you can continue as that link you mention, but there is one other way to do it. We have that <span class="math-container">$f_1,g_1$</span> are holomorphic so they have a powerseries expansion <span class="math-container">\begin{eqnarray} &amp;f_1(z) = \sum_{n=0}^{\infty}a_nz^n \\ &amp;g_1(z) = \sum_{n=0}^{\infty}b_nz^n \end{eqnarray}</span> so if we integrate on some circle <span class="math-container">$\gamma$</span> with centre <span class="math-container">$0$</span> and radious <span class="math-container">$r$</span> (I suppose that <span class="math-container">$\;D = D(0,R)\;$</span> and <span class="math-container">$\;0&lt;r&lt;R\;$</span>) equation (2), Parseval's formula gives us that <span class="math-container">\begin{eqnarray} &amp;\frac{1}{2\pi }\int_{\gamma}|f_1|^2dz + \frac{1}{2\pi }\int_{\gamma}|g_1|^2 dz =\frac{1}{2\pi}\int_{\gamma}cdz \Longrightarrow \\ &amp;c_1 = \sum_{n=0}^{\infty}|a_n|^2r^{2n}+\sum_{n=0}^{\infty}|b_n|^2r^{2n} = \sum_{n=0}^{\infty}\Bigl(|a_n|^2+|b_n|^2\Bigr)r^{2n} \quad \forall r\in(0,R) \end{eqnarray}</span> where <span class="math-container">$\; c_1\in \mathbb C\;$</span> is a constant, so all <span class="math-container">$a_n,b_n = 0$</span> for <span class="math-container">$n\geq 1$</span>, and you have the result.</p> Answer 9: <p>By the maximum modulus principle $|f(z)|$ has no local minimum and maximum except at its zeros. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$ so it is a local maximum of $|g(z)|$ and $g(a) = 0$, i.e. $f(z) = g(z)= 0$.</p> <p>$\implies$ If $f(z)$ is non-constant then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \log f(z), G(z) = \log g(z)$ and $H (z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. Differentiating : $$\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0 $$ $$ \partial_y H(z) = Im(F'(z)) e^{Re(F(z))}+Im(G'(z))e^{Re(G(z))}=0$$</p> <p>so that $$0= F'(z)e^{Re(F)}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z)$$ Finally $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ means that $(G'(z)-F'(z))e^{Re(F(z))}$ and $e^{Re(F(z))}$ is holomorphic, i.e. 
$Re(F(z))$ and $F(z),f(z)$ are constant.</p> Answer 10: <p>Here is a proof that relies heavily on the open mapping theorem.</p> <p>Since the zeros of <span class="math-container">$f,g,f',g'$</span> are isolated, we can find some <span class="math-container">$z_0$</span> such that <span class="math-container">$f,g,f',g'$</span> are non zero in a neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>Suppose <span class="math-container">$z$</span> lies in this neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>If we let <span class="math-container">$\phi(t) = {1 \over 2}|f(z+th)|$</span>, it is straightforward to show that <span class="math-container">$\phi'(0) = { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|}$</span>, hence since <span class="math-container">$|f(z)| + |g(z)|$</span> is constant we obtain <span class="math-container">\begin{eqnarray} { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|} + { \operatorname{re}(\overline{g(z)} g'(z) h ) \over |g(z)|} &amp;=&amp; \operatorname{re} \left[ \biggl( { \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} \biggr) h\right] \\ &amp;=&amp; 0 \end{eqnarray}</span> for all <span class="math-container">$h$</span>.</p> <p>Hence, we have <span class="math-container">${ \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} = 0$</span> and so <span class="math-container">${f'(z) \over g'(z)} = -{|f(z)| \over \overline{f(z)} } {\overline{g(z)} \over |g(z)|}$</span>, in particular, <span class="math-container">$| {f'(z) \over g'(z)}| = 1$</span>. Hence <span class="math-container">${f'(z) \over g'(z)} = c$</span> for some constant and so <span class="math-container">${g(z) \over f(z)} = - \overline{c} | {g(z) \over f(z)} | $</span>. Hence the values <span class="math-container">$z \mapsto {g(z) \over f(z)}$</span> takes lie on the line <span class="math-container">$\{t \overline{c} \}_{t \in \mathbb{R}}$</span> and hence we must have <span class="math-container">${g(z) \over f(z)} = d$</span>, another constant. Hence <span class="math-container">$(1+|d|)|f(z)|$</span> is a constant and so <span class="math-container">$f$</span> is a constant.</p> Answer 11: <p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p> <p>First let's enumerate our equation <span class="math-container">\begin{eqnarray} \big|f\big| + \big|g\big| = c. \quad(1) \end{eqnarray}</span></p> <p>It's easy to see that f,g can't be zero on <span class="math-container">$D$</span>, otherwise if for example there was <span class="math-container">$z_0\in D$</span> such as <span class="math-container">$f(z_0)= 0$</span> then you have that <span class="math-container">$$\big|g(z_0)\big| = c$$</span> but you also have that <span class="math-container">$$\big|g(z)\big|\leq c \quad \forall z\in D$$</span> so from maximum modulus principal you have that <span class="math-container">$g=c$</span> on <span class="math-container">$D$</span> so (1) gives us that <span class="math-container">$f = 0$</span> on <span class="math-container">$D$</span>. So we have that there exists some holomorphic functions on <span class="math-container">$D$</span>, <span class="math-container">$f_1,g_1$</span> such us <span class="math-container">\begin{eqnarray} &amp;f_1^2=f \\ &amp;g_1^2=g \end{eqnarray}</span> (Theorem 13.11 page 274 from Real and Complex Analysis by Rudin). 
Now you have what you were looking for, you have that <span class="math-container">$$\big|f_1\big|^2+\big|g_1|^2 = c \quad (2)$$</span> and now all you need is to prove is that <span class="math-container">$f_1,g_1$</span> are constant. Now you can continue as that link you mention, but there is one other way to do it. We have that <span class="math-container">$f_1,g_1$</span> are holomorphic so they have a powerseries expansion <span class="math-container">\begin{eqnarray} &amp;f_1(z) = \sum_{n=0}^{\infty}a_nz^n \\ &amp;g_1(z) = \sum_{n=0}^{\infty}b_nz^n \end{eqnarray}</span> so if we integrate on some circle <span class="math-container">$\gamma$</span> with centre <span class="math-container">$0$</span> and radious <span class="math-container">$r$</span> (I suppose that <span class="math-container">$\;D = D(0,R)\;$</span> and <span class="math-container">$\;0&lt;r&lt;R\;$</span>) equation (2), Parseval's formula gives us that <span class="math-container">\begin{eqnarray} &amp;\frac{1}{2\pi }\int_{\gamma}|f_1|^2dz + \frac{1}{2\pi }\int_{\gamma}|g_1|^2 dz =\frac{1}{2\pi}\int_{\gamma}cdz \Longrightarrow \\ &amp;c_1 = \sum_{n=0}^{\infty}|a_n|^2r^{2n}+\sum_{n=0}^{\infty}|b_n|^2r^{2n} = \sum_{n=0}^{\infty}\Bigl(|a_n|^2+|b_n|^2\Bigr)r^{2n} \quad \forall r\in(0,R) \end{eqnarray}</span> where <span class="math-container">$\; c_1\in \mathbb C\;$</span> is a constant, so all <span class="math-container">$a_n,b_n = 0$</span> for <span class="math-container">$n\geq 1$</span>, and you have the result.</p> Answer 12: <p>By the maximum modulus principle $|f(z)|$ has no local minimum and maximum except at its zeros. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$ so it is a local maximum of $|g(z)|$ and $g(a) = 0$, i.e. $f(z) = g(z)= 0$.</p> <p>$\implies$ If $f(z)$ is non-constant then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \log f(z), G(z) = \log g(z)$ and $H (z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. Differentiating : $$\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0 $$ $$ \partial_y H(z) = Im(F'(z)) e^{Re(F(z))}+Im(G'(z))e^{Re(G(z))}=0$$</p> <p>so that $$0= F'(z)e^{Re(F)}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z)$$ Finally $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ means that $(G'(z)-F'(z))e^{Re(F(z))}$ and $e^{Re(F(z))}$ is holomorphic, i.e. 
$Re(F(z))$ and $F(z),f(z)$ are constant.</p> Answer 13: <p>Here is a proof that relies heavily on the open mapping theorem.</p> <p>Since the zeros of <span class="math-container">$f,g,f',g'$</span> are isolated, we can find some <span class="math-container">$z_0$</span> such that <span class="math-container">$f,g,f',g'$</span> are non zero in a neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>Suppose <span class="math-container">$z$</span> lies in this neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>If we let <span class="math-container">$\phi(t) = {1 \over 2}|f(z+th)|$</span>, it is straightforward to show that <span class="math-container">$\phi'(0) = { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|}$</span>, hence since <span class="math-container">$|f(z)| + |g(z)|$</span> is constant we obtain <span class="math-container">\begin{eqnarray} { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|} + { \operatorname{re}(\overline{g(z)} g'(z) h ) \over |g(z)|} &amp;=&amp; \operatorname{re} \left[ \biggl( { \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} \biggr) h\right] \\ &amp;=&amp; 0 \end{eqnarray}</span> for all <span class="math-container">$h$</span>.</p> <p>Hence, we have <span class="math-container">${ \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} = 0$</span> and so <span class="math-container">${f'(z) \over g'(z)} = -{|f(z)| \over \overline{f(z)} } {\overline{g(z)} \over |g(z)|}$</span>, in particular, <span class="math-container">$| {f'(z) \over g'(z)}| = 1$</span>. Hence <span class="math-container">${f'(z) \over g'(z)} = c$</span> for some constant and so <span class="math-container">${g(z) \over f(z)} = - \overline{c} | {g(z) \over f(z)} | $</span>. Hence the values <span class="math-container">$z \mapsto {g(z) \over f(z)}$</span> takes lie on the line <span class="math-container">$\{t \overline{c} \}_{t \in \mathbb{R}}$</span> and hence we must have <span class="math-container">${g(z) \over f(z)} = d$</span>, another constant. Hence <span class="math-container">$(1+|d|)|f(z)|$</span> is a constant and so <span class="math-container">$f$</span> is a constant.</p> Answer 14: <p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p> <p>First let's enumerate our equation <span class="math-container">\begin{eqnarray} \big|f\big| + \big|g\big| = c. \quad(1) \end{eqnarray}</span></p> <p>It's easy to see that f,g can't be zero on <span class="math-container">$D$</span>, otherwise if for example there was <span class="math-container">$z_0\in D$</span> such as <span class="math-container">$f(z_0)= 0$</span> then you have that <span class="math-container">$$\big|g(z_0)\big| = c$$</span> but you also have that <span class="math-container">$$\big|g(z)\big|\leq c \quad \forall z\in D$$</span> so from maximum modulus principal you have that <span class="math-container">$g=c$</span> on <span class="math-container">$D$</span> so (1) gives us that <span class="math-container">$f = 0$</span> on <span class="math-container">$D$</span>. So we have that there exists some holomorphic functions on <span class="math-container">$D$</span>, <span class="math-container">$f_1,g_1$</span> such us <span class="math-container">\begin{eqnarray} &amp;f_1^2=f \\ &amp;g_1^2=g \end{eqnarray}</span> (Theorem 13.11 page 274 from Real and Complex Analysis by Rudin). 
Now you have what you were looking for, you have that <span class="math-container">$$\big|f_1\big|^2+\big|g_1|^2 = c \quad (2)$$</span> and now all you need is to prove is that <span class="math-container">$f_1,g_1$</span> are constant. Now you can continue as that link you mention, but there is one other way to do it. We have that <span class="math-container">$f_1,g_1$</span> are holomorphic so they have a powerseries expansion <span class="math-container">\begin{eqnarray} &amp;f_1(z) = \sum_{n=0}^{\infty}a_nz^n \\ &amp;g_1(z) = \sum_{n=0}^{\infty}b_nz^n \end{eqnarray}</span> so if we integrate on some circle <span class="math-container">$\gamma$</span> with centre <span class="math-container">$0$</span> and radious <span class="math-container">$r$</span> (I suppose that <span class="math-container">$\;D = D(0,R)\;$</span> and <span class="math-container">$\;0&lt;r&lt;R\;$</span>) equation (2), Parseval's formula gives us that <span class="math-container">\begin{eqnarray} &amp;\frac{1}{2\pi }\int_{\gamma}|f_1|^2dz + \frac{1}{2\pi }\int_{\gamma}|g_1|^2 dz =\frac{1}{2\pi}\int_{\gamma}cdz \Longrightarrow \\ &amp;c_1 = \sum_{n=0}^{\infty}|a_n|^2r^{2n}+\sum_{n=0}^{\infty}|b_n|^2r^{2n} = \sum_{n=0}^{\infty}\Bigl(|a_n|^2+|b_n|^2\Bigr)r^{2n} \quad \forall r\in(0,R) \end{eqnarray}</span> where <span class="math-container">$\; c_1\in \mathbb C\;$</span> is a constant, so all <span class="math-container">$a_n,b_n = 0$</span> for <span class="math-container">$n\geq 1$</span>, and you have the result.</p> Answer 15: <p>By the maximum modulus principle $|f(z)|$ has no local minimum and maximum except at its zeros. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$ so it is a local maximum of $|g(z)|$ and $g(a) = 0$, i.e. $f(z) = g(z)= 0$.</p> <p>$\implies$ If $f(z)$ is non-constant then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \log f(z), G(z) = \log g(z)$ and $H (z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. Differentiating : $$\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0 $$ $$ \partial_y H(z) = Im(F'(z)) e^{Re(F(z))}+Im(G'(z))e^{Re(G(z))}=0$$</p> <p>so that $$0= F'(z)e^{Re(F)}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z)$$ Finally $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ means that $(G'(z)-F'(z))e^{Re(F(z))}$ and $e^{Re(F(z))}$ is holomorphic, i.e. 
$Re(F(z))$ and $F(z),f(z)$ are constant.</p> Answer 16: <p>Here is a proof that relies heavily on the open mapping theorem.</p> <p>Since the zeros of <span class="math-container">$f,g,f',g'$</span> are isolated, we can find some <span class="math-container">$z_0$</span> such that <span class="math-container">$f,g,f',g'$</span> are non zero in a neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>Suppose <span class="math-container">$z$</span> lies in this neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>If we let <span class="math-container">$\phi(t) = {1 \over 2}|f(z+th)|$</span>, it is straightforward to show that <span class="math-container">$\phi'(0) = { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|}$</span>, hence since <span class="math-container">$|f(z)| + |g(z)|$</span> is constant we obtain <span class="math-container">\begin{eqnarray} { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|} + { \operatorname{re}(\overline{g(z)} g'(z) h ) \over |g(z)|} &amp;=&amp; \operatorname{re} \left[ \biggl( { \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} \biggr) h\right] \\ &amp;=&amp; 0 \end{eqnarray}</span> for all <span class="math-container">$h$</span>.</p> <p>Hence, we have <span class="math-container">${ \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} = 0$</span> and so <span class="math-container">${f'(z) \over g'(z)} = -{|f(z)| \over \overline{f(z)} } {\overline{g(z)} \over |g(z)|}$</span>, in particular, <span class="math-container">$| {f'(z) \over g'(z)}| = 1$</span>. Hence <span class="math-container">${f'(z) \over g'(z)} = c$</span> for some constant and so <span class="math-container">${g(z) \over f(z)} = - \overline{c} | {g(z) \over f(z)} | $</span>. Hence the values <span class="math-container">$z \mapsto {g(z) \over f(z)}$</span> takes lie on the line <span class="math-container">$\{t \overline{c} \}_{t \in \mathbb{R}}$</span> and hence we must have <span class="math-container">${g(z) \over f(z)} = d$</span>, another constant. Hence <span class="math-container">$(1+|d|)|f(z)|$</span> is a constant and so <span class="math-container">$f$</span> is a constant.</p> Answer 17: <p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p> <p>First let's enumerate our equation <span class="math-container">\begin{eqnarray} \big|f\big| + \big|g\big| = c. \quad(1) \end{eqnarray}</span></p> <p>It's easy to see that f,g can't be zero on <span class="math-container">$D$</span>, otherwise if for example there was <span class="math-container">$z_0\in D$</span> such as <span class="math-container">$f(z_0)= 0$</span> then you have that <span class="math-container">$$\big|g(z_0)\big| = c$$</span> but you also have that <span class="math-container">$$\big|g(z)\big|\leq c \quad \forall z\in D$$</span> so from maximum modulus principal you have that <span class="math-container">$g=c$</span> on <span class="math-container">$D$</span> so (1) gives us that <span class="math-container">$f = 0$</span> on <span class="math-container">$D$</span>. So we have that there exists some holomorphic functions on <span class="math-container">$D$</span>, <span class="math-container">$f_1,g_1$</span> such us <span class="math-container">\begin{eqnarray} &amp;f_1^2=f \\ &amp;g_1^2=g \end{eqnarray}</span> (Theorem 13.11 page 274 from Real and Complex Analysis by Rudin). 
Now you have what you were looking for, you have that <span class="math-container">$$\big|f_1\big|^2+\big|g_1|^2 = c \quad (2)$$</span> and now all you need is to prove is that <span class="math-container">$f_1,g_1$</span> are constant. Now you can continue as that link you mention, but there is one other way to do it. We have that <span class="math-container">$f_1,g_1$</span> are holomorphic so they have a powerseries expansion <span class="math-container">\begin{eqnarray} &amp;f_1(z) = \sum_{n=0}^{\infty}a_nz^n \\ &amp;g_1(z) = \sum_{n=0}^{\infty}b_nz^n \end{eqnarray}</span> so if we integrate on some circle <span class="math-container">$\gamma$</span> with centre <span class="math-container">$0$</span> and radious <span class="math-container">$r$</span> (I suppose that <span class="math-container">$\;D = D(0,R)\;$</span> and <span class="math-container">$\;0&lt;r&lt;R\;$</span>) equation (2), Parseval's formula gives us that <span class="math-container">\begin{eqnarray} &amp;\frac{1}{2\pi }\int_{\gamma}|f_1|^2dz + \frac{1}{2\pi }\int_{\gamma}|g_1|^2 dz =\frac{1}{2\pi}\int_{\gamma}cdz \Longrightarrow \\ &amp;c_1 = \sum_{n=0}^{\infty}|a_n|^2r^{2n}+\sum_{n=0}^{\infty}|b_n|^2r^{2n} = \sum_{n=0}^{\infty}\Bigl(|a_n|^2+|b_n|^2\Bigr)r^{2n} \quad \forall r\in(0,R) \end{eqnarray}</span> where <span class="math-container">$\; c_1\in \mathbb C\;$</span> is a constant, so all <span class="math-container">$a_n,b_n = 0$</span> for <span class="math-container">$n\geq 1$</span>, and you have the result.</p> Answer 18: <p>By the maximum modulus principle $|f(z)|$ has no local minimum and maximum except at its zeros. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$ so it is a local maximum of $|g(z)|$ and $g(a) = 0$, i.e. $f(z) = g(z)= 0$.</p> <p>$\implies$ If $f(z)$ is non-constant then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \log f(z), G(z) = \log g(z)$ and $H (z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. Differentiating : $$\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0 $$ $$ \partial_y H(z) = Im(F'(z)) e^{Re(F(z))}+Im(G'(z))e^{Re(G(z))}=0$$</p> <p>so that $$0= F'(z)e^{Re(F)}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z)$$ Finally $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ means that $(G'(z)-F'(z))e^{Re(F(z))}$ and $e^{Re(F(z))}$ is holomorphic, i.e. 
$Re(F(z))$ and $F(z),f(z)$ are constant.</p> Answer 19: <p>Here is a proof that relies heavily on the open mapping theorem.</p> <p>Since the zeros of <span class="math-container">$f,g,f',g'$</span> are isolated, we can find some <span class="math-container">$z_0$</span> such that <span class="math-container">$f,g,f',g'$</span> are non zero in a neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>Suppose <span class="math-container">$z$</span> lies in this neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>If we let <span class="math-container">$\phi(t) = {1 \over 2}|f(z+th)|$</span>, it is straightforward to show that <span class="math-container">$\phi'(0) = { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|}$</span>, hence since <span class="math-container">$|f(z)| + |g(z)|$</span> is constant we obtain <span class="math-container">\begin{eqnarray} { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|} + { \operatorname{re}(\overline{g(z)} g'(z) h ) \over |g(z)|} &amp;=&amp; \operatorname{re} \left[ \biggl( { \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} \biggr) h\right] \\ &amp;=&amp; 0 \end{eqnarray}</span> for all <span class="math-container">$h$</span>.</p> <p>Hence, we have <span class="math-container">${ \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} = 0$</span> and so <span class="math-container">${f'(z) \over g'(z)} = -{|f(z)| \over \overline{f(z)} } {\overline{g(z)} \over |g(z)|}$</span>, in particular, <span class="math-container">$| {f'(z) \over g'(z)}| = 1$</span>. Hence <span class="math-container">${f'(z) \over g'(z)} = c$</span> for some constant and so <span class="math-container">${g(z) \over f(z)} = - \overline{c} | {g(z) \over f(z)} | $</span>. Hence the values <span class="math-container">$z \mapsto {g(z) \over f(z)}$</span> takes lie on the line <span class="math-container">$\{t \overline{c} \}_{t \in \mathbb{R}}$</span> and hence we must have <span class="math-container">${g(z) \over f(z)} = d$</span>, another constant. Hence <span class="math-container">$(1+|d|)|f(z)|$</span> is a constant and so <span class="math-container">$f$</span> is a constant.</p> Answer 20: <p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p> <p>First let's enumerate our equation <span class="math-container">\begin{eqnarray} \big|f\big| + \big|g\big| = c. \quad(1) \end{eqnarray}</span></p> <p>It's easy to see that f,g can't be zero on <span class="math-container">$D$</span>, otherwise if for example there was <span class="math-container">$z_0\in D$</span> such as <span class="math-container">$f(z_0)= 0$</span> then you have that <span class="math-container">$$\big|g(z_0)\big| = c$$</span> but you also have that <span class="math-container">$$\big|g(z)\big|\leq c \quad \forall z\in D$$</span> so from maximum modulus principal you have that <span class="math-container">$g=c$</span> on <span class="math-container">$D$</span> so (1) gives us that <span class="math-container">$f = 0$</span> on <span class="math-container">$D$</span>. So we have that there exists some holomorphic functions on <span class="math-container">$D$</span>, <span class="math-container">$f_1,g_1$</span> such us <span class="math-container">\begin{eqnarray} &amp;f_1^2=f \\ &amp;g_1^2=g \end{eqnarray}</span> (Theorem 13.11 page 274 from Real and Complex Analysis by Rudin). 
Now you have what you were looking for, you have that <span class="math-container">$$\big|f_1\big|^2+\big|g_1|^2 = c \quad (2)$$</span> and now all you need is to prove is that <span class="math-container">$f_1,g_1$</span> are constant. Now you can continue as that link you mention, but there is one other way to do it. We have that <span class="math-container">$f_1,g_1$</span> are holomorphic so they have a powerseries expansion <span class="math-container">\begin{eqnarray} &amp;f_1(z) = \sum_{n=0}^{\infty}a_nz^n \\ &amp;g_1(z) = \sum_{n=0}^{\infty}b_nz^n \end{eqnarray}</span> so if we integrate on some circle <span class="math-container">$\gamma$</span> with centre <span class="math-container">$0$</span> and radious <span class="math-container">$r$</span> (I suppose that <span class="math-container">$\;D = D(0,R)\;$</span> and <span class="math-container">$\;0&lt;r&lt;R\;$</span>) equation (2), Parseval's formula gives us that <span class="math-container">\begin{eqnarray} &amp;\frac{1}{2\pi }\int_{\gamma}|f_1|^2dz + \frac{1}{2\pi }\int_{\gamma}|g_1|^2 dz =\frac{1}{2\pi}\int_{\gamma}cdz \Longrightarrow \\ &amp;c_1 = \sum_{n=0}^{\infty}|a_n|^2r^{2n}+\sum_{n=0}^{\infty}|b_n|^2r^{2n} = \sum_{n=0}^{\infty}\Bigl(|a_n|^2+|b_n|^2\Bigr)r^{2n} \quad \forall r\in(0,R) \end{eqnarray}</span> where <span class="math-container">$\; c_1\in \mathbb C\;$</span> is a constant, so all <span class="math-container">$a_n,b_n = 0$</span> for <span class="math-container">$n\geq 1$</span>, and you have the result.</p> Answer 21: <p>By the maximum modulus principle $|f(z)|$ has no local minimum and maximum except at its zeros. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$ so it is a local maximum of $|g(z)|$ and $g(a) = 0$, i.e. $f(z) = g(z)= 0$.</p> <p>$\implies$ If $f(z)$ is non-constant then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \log f(z), G(z) = \log g(z)$ and $H (z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. Differentiating : $$\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0 $$ $$ \partial_y H(z) = Im(F'(z)) e^{Re(F(z))}+Im(G'(z))e^{Re(G(z))}=0$$</p> <p>so that $$0= F'(z)e^{Re(F)}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z)$$ Finally $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ means that $(G'(z)-F'(z))e^{Re(F(z))}$ and $e^{Re(F(z))}$ is holomorphic, i.e. 
$Re(F(z))$ and $F(z),f(z)$ are constant.</p> Answer 22: <p>Here is a proof that relies heavily on the open mapping theorem.</p> <p>Since the zeros of <span class="math-container">$f,g,f',g'$</span> are isolated, we can find some <span class="math-container">$z_0$</span> such that <span class="math-container">$f,g,f',g'$</span> are non zero in a neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>Suppose <span class="math-container">$z$</span> lies in this neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>If we let <span class="math-container">$\phi(t) = {1 \over 2}|f(z+th)|$</span>, it is straightforward to show that <span class="math-container">$\phi'(0) = { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|}$</span>, hence since <span class="math-container">$|f(z)| + |g(z)|$</span> is constant we obtain <span class="math-container">\begin{eqnarray} { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|} + { \operatorname{re}(\overline{g(z)} g'(z) h ) \over |g(z)|} &amp;=&amp; \operatorname{re} \left[ \biggl( { \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} \biggr) h\right] \\ &amp;=&amp; 0 \end{eqnarray}</span> for all <span class="math-container">$h$</span>.</p> <p>Hence, we have <span class="math-container">${ \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} = 0$</span> and so <span class="math-container">${f'(z) \over g'(z)} = -{|f(z)| \over \overline{f(z)} } {\overline{g(z)} \over |g(z)|}$</span>, in particular, <span class="math-container">$| {f'(z) \over g'(z)}| = 1$</span>. Hence <span class="math-container">${f'(z) \over g'(z)} = c$</span> for some constant and so <span class="math-container">${g(z) \over f(z)} = - \overline{c} | {g(z) \over f(z)} | $</span>. Hence the values <span class="math-container">$z \mapsto {g(z) \over f(z)}$</span> takes lie on the line <span class="math-container">$\{t \overline{c} \}_{t \in \mathbb{R}}$</span> and hence we must have <span class="math-container">${g(z) \over f(z)} = d$</span>, another constant. Hence <span class="math-container">$(1+|d|)|f(z)|$</span> is a constant and so <span class="math-container">$f$</span> is a constant.</p> Answer 23: <p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p> <p>First let's enumerate our equation <span class="math-container">\begin{eqnarray} \big|f\big| + \big|g\big| = c. \quad(1) \end{eqnarray}</span></p> <p>It's easy to see that f,g can't be zero on <span class="math-container">$D$</span>, otherwise if for example there was <span class="math-container">$z_0\in D$</span> such as <span class="math-container">$f(z_0)= 0$</span> then you have that <span class="math-container">$$\big|g(z_0)\big| = c$$</span> but you also have that <span class="math-container">$$\big|g(z)\big|\leq c \quad \forall z\in D$$</span> so from maximum modulus principal you have that <span class="math-container">$g=c$</span> on <span class="math-container">$D$</span> so (1) gives us that <span class="math-container">$f = 0$</span> on <span class="math-container">$D$</span>. So we have that there exists some holomorphic functions on <span class="math-container">$D$</span>, <span class="math-container">$f_1,g_1$</span> such us <span class="math-container">\begin{eqnarray} &amp;f_1^2=f \\ &amp;g_1^2=g \end{eqnarray}</span> (Theorem 13.11 page 274 from Real and Complex Analysis by Rudin). 
Now you have what you were looking for, you have that <span class="math-container">$$\big|f_1\big|^2+\big|g_1|^2 = c \quad (2)$$</span> and now all you need is to prove is that <span class="math-container">$f_1,g_1$</span> are constant. Now you can continue as that link you mention, but there is one other way to do it. We have that <span class="math-container">$f_1,g_1$</span> are holomorphic so they have a powerseries expansion <span class="math-container">\begin{eqnarray} &amp;f_1(z) = \sum_{n=0}^{\infty}a_nz^n \\ &amp;g_1(z) = \sum_{n=0}^{\infty}b_nz^n \end{eqnarray}</span> so if we integrate on some circle <span class="math-container">$\gamma$</span> with centre <span class="math-container">$0$</span> and radious <span class="math-container">$r$</span> (I suppose that <span class="math-container">$\;D = D(0,R)\;$</span> and <span class="math-container">$\;0&lt;r&lt;R\;$</span>) equation (2), Parseval's formula gives us that <span class="math-container">\begin{eqnarray} &amp;\frac{1}{2\pi }\int_{\gamma}|f_1|^2dz + \frac{1}{2\pi }\int_{\gamma}|g_1|^2 dz =\frac{1}{2\pi}\int_{\gamma}cdz \Longrightarrow \\ &amp;c_1 = \sum_{n=0}^{\infty}|a_n|^2r^{2n}+\sum_{n=0}^{\infty}|b_n|^2r^{2n} = \sum_{n=0}^{\infty}\Bigl(|a_n|^2+|b_n|^2\Bigr)r^{2n} \quad \forall r\in(0,R) \end{eqnarray}</span> where <span class="math-container">$\; c_1\in \mathbb C\;$</span> is a constant, so all <span class="math-container">$a_n,b_n = 0$</span> for <span class="math-container">$n\geq 1$</span>, and you have the result.</p> Answer 24: <p>By the maximum modulus principle $|f(z)|$ has no local minimum and maximum except at its zeros. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$ so it is a local maximum of $|g(z)|$ and $g(a) = 0$, i.e. $f(z) = g(z)= 0$.</p> <p>$\implies$ If $f(z)$ is non-constant then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \log f(z), G(z) = \log g(z)$ and $H (z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. Differentiating : $$\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0 $$ $$ \partial_y H(z) = Im(F'(z)) e^{Re(F(z))}+Im(G'(z))e^{Re(G(z))}=0$$</p> <p>so that $$0= F'(z)e^{Re(F)}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z)$$ Finally $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ means that $(G'(z)-F'(z))e^{Re(F(z))}$ and $e^{Re(F(z))}$ is holomorphic, i.e. 
$Re(F(z))$ and $F(z),f(z)$ are constant.</p> Answer 25: <p>Here is a proof that relies heavily on the open mapping theorem.</p> <p>Since the zeros of <span class="math-container">$f,g,f',g'$</span> are isolated, we can find some <span class="math-container">$z_0$</span> such that <span class="math-container">$f,g,f',g'$</span> are non zero in a neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>Suppose <span class="math-container">$z$</span> lies in this neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>If we let <span class="math-container">$\phi(t) = {1 \over 2}|f(z+th)|$</span>, it is straightforward to show that <span class="math-container">$\phi'(0) = { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|}$</span>, hence since <span class="math-container">$|f(z)| + |g(z)|$</span> is constant we obtain <span class="math-container">\begin{eqnarray} { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|} + { \operatorname{re}(\overline{g(z)} g'(z) h ) \over |g(z)|} &amp;=&amp; \operatorname{re} \left[ \biggl( { \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} \biggr) h\right] \\ &amp;=&amp; 0 \end{eqnarray}</span> for all <span class="math-container">$h$</span>.</p> <p>Hence, we have <span class="math-container">${ \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} = 0$</span> and so <span class="math-container">${f'(z) \over g'(z)} = -{|f(z)| \over \overline{f(z)} } {\overline{g(z)} \over |g(z)|}$</span>, in particular, <span class="math-container">$| {f'(z) \over g'(z)}| = 1$</span>. Hence <span class="math-container">${f'(z) \over g'(z)} = c$</span> for some constant and so <span class="math-container">${g(z) \over f(z)} = - \overline{c} | {g(z) \over f(z)} | $</span>. Hence the values <span class="math-container">$z \mapsto {g(z) \over f(z)}$</span> takes lie on the line <span class="math-container">$\{t \overline{c} \}_{t \in \mathbb{R}}$</span> and hence we must have <span class="math-container">${g(z) \over f(z)} = d$</span>, another constant. Hence <span class="math-container">$(1+|d|)|f(z)|$</span> is a constant and so <span class="math-container">$f$</span> is a constant.</p> Answer 26: <p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p> <p>First let's enumerate our equation <span class="math-container">\begin{eqnarray} \big|f\big| + \big|g\big| = c. \quad(1) \end{eqnarray}</span></p> <p>It's easy to see that f,g can't be zero on <span class="math-container">$D$</span>, otherwise if for example there was <span class="math-container">$z_0\in D$</span> such as <span class="math-container">$f(z_0)= 0$</span> then you have that <span class="math-container">$$\big|g(z_0)\big| = c$$</span> but you also have that <span class="math-container">$$\big|g(z)\big|\leq c \quad \forall z\in D$$</span> so from maximum modulus principal you have that <span class="math-container">$g=c$</span> on <span class="math-container">$D$</span> so (1) gives us that <span class="math-container">$f = 0$</span> on <span class="math-container">$D$</span>. So we have that there exists some holomorphic functions on <span class="math-container">$D$</span>, <span class="math-container">$f_1,g_1$</span> such us <span class="math-container">\begin{eqnarray} &amp;f_1^2=f \\ &amp;g_1^2=g \end{eqnarray}</span> (Theorem 13.11 page 274 from Real and Complex Analysis by Rudin). 
Now you have what you were looking for, you have that <span class="math-container">$$\big|f_1\big|^2+\big|g_1|^2 = c \quad (2)$$</span> and now all you need is to prove is that <span class="math-container">$f_1,g_1$</span> are constant. Now you can continue as that link you mention, but there is one other way to do it. We have that <span class="math-container">$f_1,g_1$</span> are holomorphic so they have a powerseries expansion <span class="math-container">\begin{eqnarray} &amp;f_1(z) = \sum_{n=0}^{\infty}a_nz^n \\ &amp;g_1(z) = \sum_{n=0}^{\infty}b_nz^n \end{eqnarray}</span> so if we integrate on some circle <span class="math-container">$\gamma$</span> with centre <span class="math-container">$0$</span> and radious <span class="math-container">$r$</span> (I suppose that <span class="math-container">$\;D = D(0,R)\;$</span> and <span class="math-container">$\;0&lt;r&lt;R\;$</span>) equation (2), Parseval's formula gives us that <span class="math-container">\begin{eqnarray} &amp;\frac{1}{2\pi }\int_{\gamma}|f_1|^2dz + \frac{1}{2\pi }\int_{\gamma}|g_1|^2 dz =\frac{1}{2\pi}\int_{\gamma}cdz \Longrightarrow \\ &amp;c_1 = \sum_{n=0}^{\infty}|a_n|^2r^{2n}+\sum_{n=0}^{\infty}|b_n|^2r^{2n} = \sum_{n=0}^{\infty}\Bigl(|a_n|^2+|b_n|^2\Bigr)r^{2n} \quad \forall r\in(0,R) \end{eqnarray}</span> where <span class="math-container">$\; c_1\in \mathbb C\;$</span> is a constant, so all <span class="math-container">$a_n,b_n = 0$</span> for <span class="math-container">$n\geq 1$</span>, and you have the result.</p> Answer 27: <p>By the maximum modulus principle $|f(z)|$ has no local minimum and maximum except at its zeros. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$ so it is a local maximum of $|g(z)|$ and $g(a) = 0$, i.e. $f(z) = g(z)= 0$.</p> <p>$\implies$ If $f(z)$ is non-constant then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \log f(z), G(z) = \log g(z)$ and $H (z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. Differentiating : $$\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0 $$ $$ \partial_y H(z) = Im(F'(z)) e^{Re(F(z))}+Im(G'(z))e^{Re(G(z))}=0$$</p> <p>so that $$0= F'(z)e^{Re(F)}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z)$$ Finally $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ means that $(G'(z)-F'(z))e^{Re(F(z))}$ and $e^{Re(F(z))}$ is holomorphic, i.e. 
$Re(F(z))$ and $F(z),f(z)$ are constant.</p> Answer 28: <p>Here is a proof that relies heavily on the open mapping theorem.</p> <p>Since the zeros of <span class="math-container">$f,g,f',g'$</span> are isolated, we can find some <span class="math-container">$z_0$</span> such that <span class="math-container">$f,g,f',g'$</span> are non zero in a neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>Suppose <span class="math-container">$z$</span> lies in this neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>If we let <span class="math-container">$\phi(t) = {1 \over 2}|f(z+th)|$</span>, it is straightforward to show that <span class="math-container">$\phi'(0) = { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|}$</span>, hence since <span class="math-container">$|f(z)| + |g(z)|$</span> is constant we obtain <span class="math-container">\begin{eqnarray} { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|} + { \operatorname{re}(\overline{g(z)} g'(z) h ) \over |g(z)|} &amp;=&amp; \operatorname{re} \left[ \biggl( { \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} \biggr) h\right] \\ &amp;=&amp; 0 \end{eqnarray}</span> for all <span class="math-container">$h$</span>.</p> <p>Hence, we have <span class="math-container">${ \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} = 0$</span> and so <span class="math-container">${f'(z) \over g'(z)} = -{|f(z)| \over \overline{f(z)} } {\overline{g(z)} \over |g(z)|}$</span>, in particular, <span class="math-container">$| {f'(z) \over g'(z)}| = 1$</span>. Hence <span class="math-container">${f'(z) \over g'(z)} = c$</span> for some constant and so <span class="math-container">${g(z) \over f(z)} = - \overline{c} | {g(z) \over f(z)} | $</span>. Hence the values <span class="math-container">$z \mapsto {g(z) \over f(z)}$</span> takes lie on the line <span class="math-container">$\{t \overline{c} \}_{t \in \mathbb{R}}$</span> and hence we must have <span class="math-container">${g(z) \over f(z)} = d$</span>, another constant. Hence <span class="math-container">$(1+|d|)|f(z)|$</span> is a constant and so <span class="math-container">$f$</span> is a constant.</p> Answer 29: <p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p> <p>First let's enumerate our equation <span class="math-container">\begin{eqnarray} \big|f\big| + \big|g\big| = c. \quad(1) \end{eqnarray}</span></p> <p>It's easy to see that f,g can't be zero on <span class="math-container">$D$</span>, otherwise if for example there was <span class="math-container">$z_0\in D$</span> such as <span class="math-container">$f(z_0)= 0$</span> then you have that <span class="math-container">$$\big|g(z_0)\big| = c$$</span> but you also have that <span class="math-container">$$\big|g(z)\big|\leq c \quad \forall z\in D$$</span> so from maximum modulus principal you have that <span class="math-container">$g=c$</span> on <span class="math-container">$D$</span> so (1) gives us that <span class="math-container">$f = 0$</span> on <span class="math-container">$D$</span>. So we have that there exists some holomorphic functions on <span class="math-container">$D$</span>, <span class="math-container">$f_1,g_1$</span> such us <span class="math-container">\begin{eqnarray} &amp;f_1^2=f \\ &amp;g_1^2=g \end{eqnarray}</span> (Theorem 13.11 page 274 from Real and Complex Analysis by Rudin). 
Now you have what you were looking for, you have that <span class="math-container">$$\big|f_1\big|^2+\big|g_1|^2 = c \quad (2)$$</span> and now all you need is to prove is that <span class="math-container">$f_1,g_1$</span> are constant. Now you can continue as that link you mention, but there is one other way to do it. We have that <span class="math-container">$f_1,g_1$</span> are holomorphic so they have a powerseries expansion <span class="math-container">\begin{eqnarray} &amp;f_1(z) = \sum_{n=0}^{\infty}a_nz^n \\ &amp;g_1(z) = \sum_{n=0}^{\infty}b_nz^n \end{eqnarray}</span> so if we integrate on some circle <span class="math-container">$\gamma$</span> with centre <span class="math-container">$0$</span> and radious <span class="math-container">$r$</span> (I suppose that <span class="math-container">$\;D = D(0,R)\;$</span> and <span class="math-container">$\;0&lt;r&lt;R\;$</span>) equation (2), Parseval's formula gives us that <span class="math-container">\begin{eqnarray} &amp;\frac{1}{2\pi }\int_{\gamma}|f_1|^2dz + \frac{1}{2\pi }\int_{\gamma}|g_1|^2 dz =\frac{1}{2\pi}\int_{\gamma}cdz \Longrightarrow \\ &amp;c_1 = \sum_{n=0}^{\infty}|a_n|^2r^{2n}+\sum_{n=0}^{\infty}|b_n|^2r^{2n} = \sum_{n=0}^{\infty}\Bigl(|a_n|^2+|b_n|^2\Bigr)r^{2n} \quad \forall r\in(0,R) \end{eqnarray}</span> where <span class="math-container">$\; c_1\in \mathbb C\;$</span> is a constant, so all <span class="math-container">$a_n,b_n = 0$</span> for <span class="math-container">$n\geq 1$</span>, and you have the result.</p> Answer 30: <p>By the maximum modulus principle $|f(z)|$ has no local minimum and maximum except at its zeros. So if $f(a)=0$ then $a$ is a local minimum of $|f(z)|$ so it is a local maximum of $|g(z)|$ and $g(a) = 0$, i.e. $f(z) = g(z)= 0$.</p> <p>$\implies$ If $f(z)$ is non-constant then $f(z),g(z)$ have no zeros, and we can look at the holomorphic functions $F(z) = \log f(z), G(z) = \log g(z)$ and $H (z) = e^{Re(F(z))}+e^{Re(G(z))}=C$. Differentiating : $$\partial_x H(z) = Re(F'(z)) e^{Re(F(z))}+Re(G'(z))e^{Re(G(z))}=0 $$ $$ \partial_y H(z) = Im(F'(z)) e^{Re(F(z))}+Im(G'(z))e^{Re(G(z))}=0$$</p> <p>so that $$0= F'(z)e^{Re(F)}+G'(z)e^{Re(G(z))}=(F'(z)-G'(z))e^{Re(F(z))}+G'(z)(e^{Re(F(z))}+e^{Re(G(z))})$$ $$ = (F'(z)-G'(z))e^{Re(F(z))}+C G'(z)$$ Finally $(G'(z)-F'(z))e^{Re(F(z))}=C G'(z)$ means that $(G'(z)-F'(z))e^{Re(F(z))}$ and $e^{Re(F(z))}$ is holomorphic, i.e. 
$Re(F(z))$ and $F(z),f(z)$ are constant.</p> Answer 31: <p>Here is a proof that relies heavily on the open mapping theorem.</p> <p>Since the zeros of <span class="math-container">$f,g,f',g'$</span> are isolated, we can find some <span class="math-container">$z_0$</span> such that <span class="math-container">$f,g,f',g'$</span> are non zero in a neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>Suppose <span class="math-container">$z$</span> lies in this neighbourhood of <span class="math-container">$z_0$</span>.</p> <p>If we let <span class="math-container">$\phi(t) = {1 \over 2}|f(z+th)|$</span>, it is straightforward to show that <span class="math-container">$\phi'(0) = { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|}$</span>, hence since <span class="math-container">$|f(z)| + |g(z)|$</span> is constant we obtain <span class="math-container">\begin{eqnarray} { \operatorname{re}(\overline{f(z)} f'(z) h ) \over |f(z)|} + { \operatorname{re}(\overline{g(z)} g'(z) h ) \over |g(z)|} &amp;=&amp; \operatorname{re} \left[ \biggl( { \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} \biggr) h\right] \\ &amp;=&amp; 0 \end{eqnarray}</span> for all <span class="math-container">$h$</span>.</p> <p>Hence, we have <span class="math-container">${ \overline{f(z)} f'(z) \over |f(z)|} + { \overline{g(z)} g'(z) \over |g(z)|} = 0$</span> and so <span class="math-container">${f'(z) \over g'(z)} = -{|f(z)| \over \overline{f(z)} } {\overline{g(z)} \over |g(z)|}$</span>, in particular, <span class="math-container">$| {f'(z) \over g'(z)}| = 1$</span>. Hence <span class="math-container">${f'(z) \over g'(z)} = c$</span> for some constant and so <span class="math-container">${g(z) \over f(z)} = - \overline{c} | {g(z) \over f(z)} | $</span>. Hence the values <span class="math-container">$z \mapsto {g(z) \over f(z)}$</span> takes lie on the line <span class="math-container">$\{t \overline{c} \}_{t \in \mathbb{R}}$</span> and hence we must have <span class="math-container">${g(z) \over f(z)} = d$</span>, another constant. Hence <span class="math-container">$(1+|d|)|f(z)|$</span> is a constant and so <span class="math-container">$f$</span> is a constant.</p> Answer 32: <p>Here is an answer that will help you to solve the problem the way that you already tried to do it.</p> <p>First let's enumerate our equation <span class="math-container">\begin{eqnarray} \big|f\big| + \big|g\big| = c. \quad(1) \end{eqnarray}</span></p> <p>It's easy to see that f,g can't be zero on <span class="math-container">$D$</span>, otherwise if for example there was <span class="math-container">$z_0\in D$</span> such as <span class="math-container">$f(z_0)= 0$</span> then you have that <span class="math-container">$$\big|g(z_0)\big| = c$$</span> but you also have that <span class="math-container">$$\big|g(z)\big|\leq c \quad \forall z\in D$$</span> so from maximum modulus principal you have that <span class="math-container">$g=c$</span> on <span class="math-container">$D$</span> so (1) gives us that <span class="math-container">$f = 0$</span> on <span class="math-container">$D$</span>. So we have that there exists some holomorphic functions on <span class="math-container">$D$</span>, <span class="math-container">$f_1,g_1$</span> such us <span class="math-container">\begin{eqnarray} &amp;f_1^2=f \\ &amp;g_1^2=g \end{eqnarray}</span> (Theorem 13.11 page 274 from Real and Complex Analysis by Rudin). 
Now you have what you were looking for, you have that <span class="math-container">$$\big|f_1\big|^2+\big|g_1\big|^2 = c \quad (2)$$</span> and now all you need to prove is that <span class="math-container">$f_1,g_1$</span> are constant. Now you can continue as in that link you mention, but there is one other way to do it. We have that <span class="math-container">$f_1,g_1$</span> are holomorphic so they have a power series expansion <span class="math-container">\begin{eqnarray} &amp;f_1(z) = \sum_{n=0}^{\infty}a_nz^n \\ &amp;g_1(z) = \sum_{n=0}^{\infty}b_nz^n \end{eqnarray}</span> so if we integrate equation (2) on some circle <span class="math-container">$\gamma$</span> with centre <span class="math-container">$0$</span> and radius <span class="math-container">$r$</span> (I suppose that <span class="math-container">$\;D = D(0,R)\;$</span> and <span class="math-container">$\;0&lt;r&lt;R\;$</span>), Parseval's formula gives us that <span class="math-container">\begin{eqnarray} &amp;\frac{1}{2\pi }\int_{0}^{2\pi}|f_1(re^{i\theta})|^2d\theta + \frac{1}{2\pi }\int_{0}^{2\pi}|g_1(re^{i\theta})|^2 d\theta =\frac{1}{2\pi}\int_{0}^{2\pi}c\,d\theta \Longrightarrow \\ &amp;c_1 = \sum_{n=0}^{\infty}|a_n|^2r^{2n}+\sum_{n=0}^{\infty}|b_n|^2r^{2n} = \sum_{n=0}^{\infty}\Bigl(|a_n|^2+|b_n|^2\Bigr)r^{2n} \quad \forall r\in(0,R) \end{eqnarray}</span> where <span class="math-container">$\; c_1\in \mathbb C\;$</span> is a constant, so all <span class="math-container">$a_n,b_n = 0$</span> for <span class="math-container">$n\geq 1$</span> (a power series in r that is constant on an interval must have vanishing higher-order coefficients), and you have the result.</p>
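As a quick sanity check on the Parseval step above, here is a small numerical sketch (an illustration added in editing, not part of the posted answers; the sample function and its coefficients are made up). It confirms that the circle average of $|f_1|^2$ equals $\sum_n |a_n|^2 r^{2n}$, so the average can only be independent of $r$ when every coefficient with $n\geq 1$ vanishes.

```python
import numpy as np

def circle_average_sq_modulus(coeffs, r, samples=4096):
    """Numerically average |f1|^2 over the circle |z| = r,
    where f1(z) = sum_n coeffs[n] * z**n."""
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    z = r * np.exp(1j * theta)
    f1 = sum(a * z**n for n, a in enumerate(coeffs))
    return np.mean(np.abs(f1) ** 2)

coeffs = [1.0, 0.5]  # made-up example: f1(z) = 1 + 0.5 z, not constant
for r in (0.1, 0.3, 0.5):
    numeric = circle_average_sq_modulus(coeffs, r)
    parseval = sum(abs(a) ** 2 * r ** (2 * n) for n, a in enumerate(coeffs))
    print(f"r={r}: circle average {numeric:.6f}, Parseval sum {parseval:.6f}")
```

For this non-constant example the averages grow with r, exactly as the $r^{2n}$ terms predict; only a constant $f_1$ gives an r-independent average.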
Integration by parts with limits
2,082,787
6
https://math.stackexchange.com/questions/2082787/integration-by-parts-with-limits
<p>$$ \int{u\frac{dv}{dx}} \equiv uv -\int{v\frac{du}{dx}}$$ I prefer to write as: $ \int{uv} \equiv u\int{v} -\int{(\int{v})\frac{du}{dx}}$</p> <p>I get how this works, $\int{x\ln{x} = \ln{x}\cdot\frac{x^2}{2} -\frac{x^2}{4}}$</p> <p>However, when limits are applied, I don't seem to be able to get the right answer, doing:</p> <p>$$\int_1^2{x\ln{x}} = \ln{x}\bigg[\frac{x^2}{2}\bigg]^2_1-\int^2_1{\bigg(\bigg[\frac{x^2}{2}\bigg]^2_1\frac{1}{x})} = \\ \int_1^2{x\ln{x}} = \bigg[\frac{\ln{x}\cdot x^2}{2}\bigg]^2_1-\int^2_1{\bigg(\bigg[\frac{x}{2}\bigg]^2_1\bigg)} = \\ \ln{4} - \cdots$$</p> <p>How is the second part resolved? $- \int^2_1{\frac12}$??</p> <p>$= - [\frac x2]^2_1 = -\frac12; \ln{4} - \frac12$ isn't the right answer.</p>
[ { "answer_id": 2082797, "score": 4, "text": "<p>There are two related but different operations you have to do for integration by parts when it's between limits: finding an antiderivative for one of the functions (and the derivative of the other, but that's not where your problem lies), and finding the value of the integral between the limits, which we normally (but not necessarily) do by finding a continuous antiderivative and plugging in the limits.</p>\n\n<p>Remember that integration by parts comes from the product rule:\n$$ uv + C = \\int (uv)' = \\int u'v+\\int uv', $$\nwhere $u'$ means $du/dx$ and so on, and we conventionally don't worry about the integration constant until later. If we now put limits on this, it becomes\n$$ [uv]_a^b = \\int_a^b u'v + \\int_a^b uv' $$</p>\n\n<p>Therefore to evaluate a definite integral $ \\int_a^b fg $ using integration by parts, we need a function $F$ so that $F'=f$, i.e. an antiderivative of $f$, from which we find, using the previous displayed equation, that\n$$ \\int_a^b fg = \\int_a^b F'g = [Fg]_a^b - \\int_a^b Fg'. $$\nIn particular, finding $F$ is the same as doing an indefinite integral, which you would write as $\\int f$, and does not involve putting the limits in.</p>\n\n<p>The moral of the story is that the computation you want is done as follows:\n$$\\begin{align}\n \\int_1^2 x\\log{x} \\, dx &amp;= \\left[\\frac{1}{2}x^2\\log{x} \\right]_1^2 - \\int_1^2 \\frac{1}{2}x^2 \\frac{1}{x} \\, dx \\\\\n &amp;= \\frac{2^2\\log{2}}{2}-\\frac{1^1\\log{1}}{2} - \\frac{1}{2}\\int_1^2 x \\, dx \\\\\n &amp;= 2\\log{2}-0 - \\left[ \\frac{1}{4} x^2 \\right]_1^2 \\\\\n &amp;= \\log{4} - \\frac{3}{4}.\n\\end{align}$$</p>\n" }, { "answer_id": 2082795, "score": 2, "text": "<p>If you prefer to write your way, it's fine as long as you keep in mind that, you're supposed to plug in the upper and lower limits right at the very end, once you're done with evaluating the integrals.</p>\n" }, { "answer_id": 2082805, "score": 1, "text": "<p>Let's try it the way other people do it:</p>\n\n<p>$$\n \\int u\\frac{dv}{dx}\\,dx = uv -\\int v\\frac{du}{dx}\\,dx,\n$$\nletting $u=\\ln x$ and $v = \\frac{x^2}2,$ gives us\n\\begin{align}\n \\int (\\ln x)(x)\\,dx \n &amp;= (\\ln x)\\left(\\frac{x^2}2\\right) \n - \\int \\left(\\frac{x^2}2\\right)\\left(\\frac 1x\\right)\\,dx \\\\\n &amp;= \\frac{x^2 \\ln x}2 - \\int \\frac x2\\,dx\n\\end{align}\nIntegrated over the interval $x\\in[1,2]$ this becomes\n\\begin{align}\n \\int_1^2 x \\ln x\\,dx \n &amp;= \\left[\\frac{x^2 \\ln x}2\\right]_1^2 - \\int_1^2 \\frac x2\\,dx \\\\\n &amp;= \\left(\\frac{2^2 \\ln 2}2 - \\frac{1^2 \\ln 1}2\\right)\n - \\left(\\frac{2^2}4 - \\frac{1^2}4\\right) \\\\\n &amp;= (2 \\ln 2 - 0) - \\left(1 - \\frac14\\right) \\\\\n &amp;= \\ln 4 - \\frac 34.\n\\end{align}</p>\n\n<p>Your way is to write $\\int v$ where other people write $v,$\nso you write $v$ where they write $\\frac{dv}{dx}.$\nSo far so good, but remember that in the two places where you\nwrite $\\int v$ on the right-hand side, you must substitute the\nsame antiderivative of $v$ in each place--that is, in the rightmost\nintegral you must multiply $\\frac{du}{dx}$ by the same function of\n$x$ that you multiply with $u$ in the first term on the right.\nYou do <em>not</em> multiply $\\frac{du}{dx}$ by a definite integral with given bounds, because that is a constant, <em>not</em> the desired function of $x.$</p>\n\n<p>In short, when you make your substitutions for $u$ and $v$\nthe results should be just like the ones above.</p>\n\n<hr>\n\n<p>While it 
was a mistake for you to write \n$\\left[\\frac{x^2}2\\right]_1^2$ where you should have written just\n$\\frac{x^2}2$ in the integral on the right-hand side, you made a\nsecond serious error on top of that:\n$$\n\\left[\\frac{x^2}2\\right]_1^2 \\frac1x \n= \\left(2 - \\frac12\\right) \\cdot \\frac1x = \\frac3{2x},\n$$\nbut\n$$\n\\left[\\frac x2\\right]_1^2 = 1 - \\frac12 = \\frac12.\n$$\nThe expression $\\left[\\frac x2\\right]_1^2$ is <em>not</em> a legitimate\nsubstitute for $\\left[\\frac{x^2}2\\right]_1^2 \\frac1x$\nunless you happen to be evaluating both in an environment where $x$\nis a constant $3$ outside the square brackets.\nOf course you only had the opportunity to make this mistake because\nyou put \"integration bounds\" in a place where they had no business to be\nin the first place, but you should understand why these two expressions\nare so fundamentally different.</p>\n" } ]
Question: Integration by parts with limits Answer 1: <p>There are two related but different operations you have to do for integration by parts when it's between limits: finding an antiderivative for one of the functions (and the derivative of the other, but that's not where your problem lies), and finding the value of the integral between the limits, which we normally (but not necessarily) do by finding a continuous antiderivative and plugging in the limits.</p> <p>Remember that integration by parts comes from the product rule: $$ uv + C = \int (uv)' = \int u'v+\int uv', $$ where $u'$ means $du/dx$ and so on, and we conventionally don't worry about the integration constant until later. If we now put limits on this, it becomes $$ [uv]_a^b = \int_a^b u'v + \int_a^b uv' $$</p> <p>Therefore to evaluate a definite integral $ \int_a^b fg $ using integration by parts, we need a function $F$ so that $F'=f$, i.e. an antiderivative of $f$, from which we find, using the previous displayed equation, that $$ \int_a^b fg = \int_a^b F'g = [Fg]_a^b - \int_a^b Fg'. $$ In particular, finding $F$ is the same as doing an indefinite integral, which you would write as $\int f$, and does not involve putting the limits in.</p> <p>The moral of the story is that the computation you want is done as follows: $$\begin{align} \int_1^2 x\log{x} \, dx &amp;= \left[\frac{1}{2}x^2\log{x} \right]_1^2 - \int_1^2 \frac{1}{2}x^2 \frac{1}{x} \, dx \\ &amp;= \frac{2^2\log{2}}{2}-\frac{1^1\log{1}}{2} - \frac{1}{2}\int_1^2 x \, dx \\ &amp;= 2\log{2}-0 - \left[ \frac{1}{4} x^2 \right]_1^2 \\ &amp;= \log{4} - \frac{3}{4}. \end{align}$$</p> Answer 2: <p>If you prefer to write your way, it's fine as long as you keep in mind that, you're supposed to plug in the upper and lower limits right at the very end, once you're done with evaluating the integrals.</p> Answer 3: <p>Let's try it the way other people do it:</p> <p>$$ \int u\frac{dv}{dx}\,dx = uv -\int v\frac{du}{dx}\,dx, $$ letting $u=\ln x$ and $v = \frac{x^2}2,$ gives us \begin{align} \int (\ln x)(x)\,dx &amp;= (\ln x)\left(\frac{x^2}2\right) - \int \left(\frac{x^2}2\right)\left(\frac 1x\right)\,dx \\ &amp;= \frac{x^2 \ln x}2 - \int \frac x2\,dx \end{align} Integrated over the interval $x\in[1,2]$ this becomes \begin{align} \int_1^2 x \ln x\,dx &amp;= \left[\frac{x^2 \ln x}2\right]_1^2 - \int_1^2 \frac x2\,dx \\ &amp;= \left(\frac{2^2 \ln 2}2 - \frac{1^2 \ln 1}2\right) - \left(\frac{2^2}4 - \frac{1^2}4\right) \\ &amp;= (2 \ln 2 - 0) - \left(1 - \frac14\right) \\ &amp;= \ln 4 - \frac 34. \end{align}</p> <p>Your way is to write $\int v$ where other people write $v,$ so you write $v$ where they write $\frac{dv}{dx}.$ So far so good, but remember that in the two places where you write $\int v$ on the right-hand side, you must substitute the same antiderivative of $v$ in each place--that is, in the rightmost integral you must multiply $\frac{du}{dx}$ by the same function of $x$ that you multiply with $u$ in the first term on the right. 
You do <em>not</em> multiply $\frac{du}{dx}$ by a definite integral with given bounds, because that is a constant, <em>not</em> the desired function of $x.$</p> <p>In short, when you make your substitutions for $u$ and $v$ the results should be just like the ones above.</p> <hr> <p>While it was a mistake for you to write $\left[\frac{x^2}2\right]_1^2$ where you should have written just $\frac{x^2}2$ in the integral on the right-hand side, you made a second serious error on top of that: $$ \left[\frac{x^2}2\right]_1^2 \frac1x = \left(2 - \frac12\right) \cdot \frac1x = \frac3{2x}, $$ but $$ \left[\frac x2\right]_1^2 = 1 - \frac12 = \frac12. $$ The expression $\left[\frac x2\right]_1^2$ is <em>not</em> a legitimate substitute for $\left[\frac{x^2}2\right]_1^2 \frac1x$ unless you happen to be evaluating both in an environment where $x$ is a constant $3$ outside the square brackets. Of course you only had the opportunity to make this mistake because you put "integration bounds" in a place where they had no business to be in the first place, but you should understand why these two expressions are so fundamentally different.</p>
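As a quick cross-check of the value $\ln 4 - \frac 34$ obtained above by integration by parts, the definite integral can be evaluated symbolically with SymPy. This is an editorial sanity check added here, not part of the original thread:

```python
# Verify the definite integral computed above by integration by parts.
from sympy import symbols, log, integrate

x = symbols('x', positive=True)
result = integrate(x * log(x), (x, 1, 2))
print(result)  # the exact value: 2*log(2) - 3/4, i.e. log(4) - 3/4
```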
Prove that $(2+ \sqrt5)^{\frac13} + (2- \sqrt5)^{\frac13}$ is an integer
2,082,836
6
https://math.stackexchange.com/questions/2082836/prove-that-2-sqrt5-frac13-2-sqrt5-frac13-is-an-integer
<p>When checked in a calculator, it is 1. But how can this be proved? Also, it is not a standard form like $x+ \frac1x$ that can be handled by direct rationalization. So I just need something to get started.</p>
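The calculator check mentioned in the question is easy to reproduce. One caveat for anyone trying it in Python: the `**` operator returns a complex number when a negative base is raised to a fractional power, so the real cube root must be taken by hand. The helper `real_cbrt` below is an illustrative name chosen here, not something from the thread:

```python
# Numerically evaluate (2 + sqrt(5))**(1/3) + (2 - sqrt(5))**(1/3).
def real_cbrt(t):
    # Take the real cube root; t ** (1/3) would be complex for t < 0.
    return t ** (1 / 3) if t >= 0 else -((-t) ** (1 / 3))

s = 5 ** 0.5
print(real_cbrt(2 + s) + real_cbrt(2 - s))  # 1.0 up to floating-point rounding
```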
[ { "answer_id": 2082838, "score": 12, "text": "<p>Since $$(1 - \\sqrt{5})^3 = 16 - 8 \\sqrt{5}$$ and similarly $$(1 + \\sqrt{5})^3 = 16 + 8 \\sqrt{5}$$ it follows that $$(2 + \\sqrt{5})^{1/3} + (2 - \\sqrt{5})^{1/3} = \\left(\\frac {16+8\\sqrt{5}} 8 \\right)^{1/3} +\\left(\\frac {16-8\\sqrt{5}} 8 \\right)^{1/3} = \\frac{1 + \\sqrt{5}}{2} + \\frac{1 - \\sqrt{5}}{2} = 1.$$</p>\n" }, { "answer_id": 2082839, "score": 5, "text": "<p>Use the general formula $x^3+y^3=(x+y)\\left((x+y)^2-3xy\\right)$.</p>\n\n<p>Here $x=\\sqrt[3]{2+\\sqrt{5}}$ and $y=\\sqrt[3]{2-\\sqrt{5}}$.</p>\n\n<p>Let $a=\\sqrt[3]{2+\\sqrt{5}}+\\sqrt[3]{2-\\sqrt{5}}$ and</p>\n\n<p>$$b=\\sqrt[3]{2+\\sqrt{5}}\\sqrt[3]{2-\\sqrt{5}}=\\sqrt[3]{2^2-(\\sqrt{5})^2}=-1$$</p>\n\n<p>$$(2+\\sqrt{5})+(2-\\sqrt{5})=4$$</p>\n\n<p>$$=a\\left(a^2-3b\\right)=a^3+3a$$</p>\n\n<p>$$a^3+3a-4=0$$</p>\n\n<p>You can apply Rational Root Theorem/Test, polynomial division, fundamental theorem of algebra, etc.</p>\n\n<p>$$(a-1)\\left(a^2+a+4\\right)=0$$</p>\n\n<p>$a^2+a+4=0$ has no real solutions, but clearly $a$ is real, so $a=1$. </p>\n" }, { "answer_id": 2082884, "score": 2, "text": "<p>We have $x- \\left(2+\\sqrt 5 \\right)^{\\frac{1}{3}} - \\left(2-\\sqrt 5 \\right)^{\\frac{1}{3}} = 0$</p>\n\n<p>So $x^3 -(2-\\sqrt 5)-(2+\\sqrt 5) = 3x \\left(2+\\sqrt 5 \\right)^{\\frac{1}{3}}\\left(2-\\sqrt 5 \\right)^{\\frac{1}{3}}= -3x$</p>\n\n<p>(invoking that $a^3+b^3+c^3 = 3abc$ when $a+b+c = 0$)</p>\n\n<p>from which we see that $x$ is a root of $x^3+3x-4 =0$ </p>\n\n<p>Since the derivative is positive, this means it has only one real root, which by inspection is $x=1$</p>\n" }, { "answer_id": 2082843, "score": 1, "text": "<p>If $x = a+b$, then $x^3 = (3ab)x + (a^3+b^3)$. \n(Conversely, if $x^3 - px - q = 0 $ then we may find $a,b$, such that $3ab = p, a^3+b^3=q$. \nusing <a href=\"https://en.m.wikipedia.org/wiki/Cubic_equation#Cardano.27s_method\" rel=\"nofollow noreferrer\">Cardano's method</a> for solving the cubic equation.) </p>\n\n<p>Letting $w = e^{\\frac {2i \\pi}{3}} $ denote a cube root of unity, notice that the numbers $a+b, wa + w^2b$, and $w^2a + wb$ all satisfy the same cubic equation, $x^3 = 3abx + (a^3+b^3)$, and in general the solutions to the equation \n$x^3 = 3w^iabx + (a^3+b^3) $ are given by $x = w^ja + w^{i-j}b$, for $j=0,1,2$. </p>\n\n<p>There are thus $9$ possible values of $(2+\\sqrt{5})^{1/3} + (2-\\sqrt{5})^{1/3}$, depending on which cube roots you take, and they each satisfy one of $3$ cubic equations. But only one of these values, namely $1$, is real, let alone rational. Hope it helps. </p>\n" }, { "answer_id": 2082858, "score": 1, "text": "<p>An easy way is to simplify the nested radical, and then combine like terms.</p>\n\n<p>An easy way to denest $\\sqrt[m]{A+B\\sqrt[n]{C}}$ is to assume the form $a+b\\sqrt[n]{C}$ and expand both sides with the binomial theorem. 
$\\sqrt[3]{2+\\sqrt5}$ becomes$$2+\\sqrt5=(a^3+15ab^2)+(3a^2b+5b^3)\\sqrt5\\tag1$$\nFrom which we get a system of equations\\begin{align*} &amp; a^3+15ab^2=2\\tag2\\\\ &amp; 3a^2b+5b^3=1\\tag3\\end{align*}\nCross multiplying, we get$$a^3+15ab^2=6a^2b+10b^3\\\\a^3-6a^2b+15ab^2-10b^3=0\\tag4$$\nDividing the last equation of $(4)$ by $b^3$ and substituting $x=a/b$ gives the cubic$$x^3-6x^2+15x-10=0\\implies x=1$$\nTherefore, $a=b$ and plugging that back into the system, we get$$a^3+15ab^2=2\\implies 16b^3=2\\implies a=b=\\dfrac 12$$\nHence, $\\sqrt[3]{2+\\sqrt5}=\\dfrac 12+\\dfrac {\\sqrt5} 2$ and similarly, $\\sqrt[3]{2-\\sqrt5}=\\dfrac 12-\\dfrac {\\sqrt5}2$.</p>\n\n<p>Therefore, we have$$\\sqrt[3]{2+\\sqrt5}+\\sqrt[3]{2-\\sqrt5}=\\dfrac 12+\\dfrac 12+\\dfrac {\\sqrt5}2-\\dfrac {\\sqrt5}2=\\boxed{1}\\tag5$$</p>\n" }, { "answer_id": 2082891, "score": 1, "text": "<p>Take the cube of the expression and expand it. You get\n\\begin{multline}\n\\left(\\sqrt[3]{2 + \\sqrt{5}} + \\sqrt[3]{2 - \\sqrt{5}}\\right)^3\n\\\\= 2 + \\sqrt{5} + 2 - \\sqrt{5} + 3\\sqrt[3]{(2 + \\sqrt{5})(2 - \\sqrt{5})}\\left(\\sqrt[3]{2 + \\sqrt{5}} + \\sqrt[3]{2 - \\sqrt{5}}\\right)\n\\\\ = 4 - 3 \\left(\\sqrt[3]{2 + \\sqrt{5}} + \\sqrt[3]{2 - \\sqrt{5}}\\right)\n\\end{multline}\nThus, $\\sqrt[3]{2 + \\sqrt{5}} + \\sqrt[3]{2 - \\sqrt{5}}$ solves $x^3 = 4 - 3x$. It's not too difficult to find that $1$ is a solution to this equation.</p>\n" }, { "answer_id": 2082900, "score": 0, "text": "<p>Let $x=(2+\\sqrt 5)^{1/3}$ and $y=(2-\\sqrt 5)^{1/3} .$</p>\n\n<p>We have $5&gt;x^3&gt;4$ and $0&gt;y^3&gt;-1$ so $2&gt;x&gt;1$ and $0&gt;y&gt;-1,$ so $2&gt;x+y&gt;0.$ Therefore $$x+y\\in \\mathbb Z \\iff x+y=1.$$ We have $xy= (x^3y^3)^{1/3}=(-1)^{1/3}=-1$ and $x^3+y^3=4.$ So we have $$(x+y)^3=x^3+y^3+3xy(x+y)=4-3(x+y).$$ So $x+y$ is a positive solution to the equation $z^3+3z=4.$ The function $f(z)=z^3+3z$ is strictly increasing for $z&gt;0$ (As it is the sum of the strictly increasing functions $z^3$ and $3z$.)</p>\n\n<p>So $f(z)=4$ has at most one positive solution, which, if it exists, is equal to $x+y.$ Therefore $$x+y\\in \\mathbb Z \\iff 1^3+3\\cdot 1=4.$$</p>\n\n<p>For insight on why this kind of thing happens, see Cardano's Method for solving cubic equations. </p>\n" } ]
Question: Prove that $(2+ \sqrt5)^{\frac13} + (2- \sqrt5)^{\frac13}$ is an integer Answer 1: <p>Since $$(1 - \sqrt{5})^3 = 16 - 8 \sqrt{5}$$ and similarly $$(1 + \sqrt{5})^3 = 16 + 8 \sqrt{5},$$ it follows that $$(2 + \sqrt{5})^{1/3} + (2 - \sqrt{5})^{1/3} = \left(\frac {16+8\sqrt{5}} 8 \right)^{1/3} +\left(\frac {16-8\sqrt{5}} 8 \right)^{1/3} = \frac{1 + \sqrt{5}}{2} + \frac{1 - \sqrt{5}}{2} = 1.$$</p> Answer 2: <p>Use the general formula $x^3+y^3=(x+y)\left((x+y)^2-3xy\right)$.</p> <p>Here $x=\sqrt[3]{2+\sqrt{5}}$ and $y=\sqrt[3]{2-\sqrt{5}}$.</p> <p>Let $a=\sqrt[3]{2+\sqrt{5}}+\sqrt[3]{2-\sqrt{5}}$ and</p> <p>$$b=\sqrt[3]{2+\sqrt{5}}\sqrt[3]{2-\sqrt{5}}=\sqrt[3]{2^2-(\sqrt{5})^2}=-1.$$</p> <p>Then $$x^3+y^3=(2+\sqrt{5})+(2-\sqrt{5})=4$$</p> <p>and $$x^3+y^3=a\left(a^2-3b\right)=a^3+3a,$$</p> <p>so $$a^3+3a-4=0.$$</p> <p>You can apply the Rational Root Theorem/Test, polynomial division, the fundamental theorem of algebra, etc.:</p> <p>$$(a-1)\left(a^2+a+4\right)=0$$</p> <p>$a^2+a+4=0$ has no real solutions, but clearly $a$ is real, so $a=1$. </p> Answer 3: <p>Let $x = \left(2+\sqrt 5 \right)^{\frac{1}{3}} + \left(2-\sqrt 5 \right)^{\frac{1}{3}},$ so that $x- \left(2+\sqrt 5 \right)^{\frac{1}{3}} - \left(2-\sqrt 5 \right)^{\frac{1}{3}} = 0.$</p> <p>So $x^3 -(2-\sqrt 5)-(2+\sqrt 5) = 3x \left(2+\sqrt 5 \right)^{\frac{1}{3}}\left(2-\sqrt 5 \right)^{\frac{1}{3}}= -3x$</p> <p>(invoking that $a^3+b^3+c^3 = 3abc$ when $a+b+c = 0$),</p> <p>from which we see that $x$ is a root of $x^3+3x-4 =0.$ </p> <p>Since the derivative is positive, this means it has only one real root, which by inspection is $x=1.$</p> Answer 4: <p>If $x = a+b$, then $x^3 = (3ab)x + (a^3+b^3)$. (Conversely, if $x^3 - px - q = 0$ then we may find $a,b$ such that $3ab = p$ and $a^3+b^3=q$, using <a href="https://en.m.wikipedia.org/wiki/Cubic_equation#Cardano.27s_method" rel="nofollow noreferrer">Cardano's method</a> for solving the cubic equation.) </p> <p>Letting $w = e^{\frac {2i \pi}{3}} $ denote a cube root of unity, notice that the numbers $a+b, wa + w^2b$, and $w^2a + wb$ all satisfy the same cubic equation, $x^3 = 3abx + (a^3+b^3)$, and in general the solutions to the equation $x^3 = 3w^iabx + (a^3+b^3) $ are given by $x = w^ja + w^{i-j}b$, for $j=0,1,2$. </p> <p>There are thus $9$ possible values of $(2+\sqrt{5})^{1/3} + (2-\sqrt{5})^{1/3}$, depending on which cube roots you take, and they each satisfy one of $3$ cubic equations. But only one of these values, namely $1$, is real, let alone rational. Hope it helps. </p> Answer 5: <p>An easy way is to simplify the nested radical, and then combine like terms.</p> <p>An easy way to denest $\sqrt[m]{A+B\sqrt[n]{C}}$ is to assume the form $a+b\sqrt[n]{C}$ and expand both sides with the binomial theorem. $\sqrt[3]{2+\sqrt5}$ becomes$$2+\sqrt5=(a^3+15ab^2)+(3a^2b+5b^3)\sqrt5\tag1$$ From which we get a system of equations\begin{align*} &amp; a^3+15ab^2=2\tag2\\ &amp; 3a^2b+5b^3=1\tag3\end{align*} Cross multiplying, we get$$a^3+15ab^2=6a^2b+10b^3\\a^3-6a^2b+15ab^2-10b^3=0\tag4$$ Dividing the last equation of $(4)$ by $b^3$ and substituting $x=a/b$ gives the cubic$$x^3-6x^2+15x-10=0\implies x=1$$ Therefore, $a=b$ and plugging that back into the system, we get$$a^3+15ab^2=2\implies 16b^3=2\implies a=b=\dfrac 12$$ Hence, $\sqrt[3]{2+\sqrt5}=\dfrac 12+\dfrac {\sqrt5} 2$ and similarly, $\sqrt[3]{2-\sqrt5}=\dfrac 12-\dfrac {\sqrt5}2$.</p> <p>Therefore, we have$$\sqrt[3]{2+\sqrt5}+\sqrt[3]{2-\sqrt5}=\dfrac 12+\dfrac 12+\dfrac {\sqrt5}2-\dfrac {\sqrt5}2=\boxed{1}\tag5$$</p> Answer 6: <p>Take the cube of the expression and expand it. You get \begin{multline} \left(\sqrt[3]{2 + \sqrt{5}} + \sqrt[3]{2 - \sqrt{5}}\right)^3 \\= 2 + \sqrt{5} + 2 - \sqrt{5} + 3\sqrt[3]{(2 + \sqrt{5})(2 - \sqrt{5})}\left(\sqrt[3]{2 + \sqrt{5}} + \sqrt[3]{2 - \sqrt{5}}\right) \\ = 4 - 3 \left(\sqrt[3]{2 + \sqrt{5}} + \sqrt[3]{2 - \sqrt{5}}\right) \end{multline} Thus, $\sqrt[3]{2 + \sqrt{5}} + \sqrt[3]{2 - \sqrt{5}}$ solves $x^3 = 4 - 3x$. It's not too difficult to find that $1$ is a solution to this equation.</p> Answer 7: <p>Let $x=(2+\sqrt 5)^{1/3}$ and $y=(2-\sqrt 5)^{1/3}.$</p> <p>We have $5&gt;x^3&gt;4$ and $0&gt;y^3&gt;-1$ so $2&gt;x&gt;1$ and $0&gt;y&gt;-1,$ so $2&gt;x+y&gt;0.$ Therefore $$x+y\in \mathbb Z \iff x+y=1.$$ We have $xy= (x^3y^3)^{1/3}=(-1)^{1/3}=-1$ and $x^3+y^3=4.$ So we have $$(x+y)^3=x^3+y^3+3xy(x+y)=4-3(x+y).$$ So $x+y$ is a positive solution to the equation $z^3+3z=4.$ The function $f(z)=z^3+3z$ is strictly increasing for $z&gt;0$ (as it is the sum of the strictly increasing functions $z^3$ and $3z$).</p> <p>So $f(z)=4$ has at most one positive solution, which, if it exists, is equal to $x+y.$ Since $1^3+3\cdot 1=4,$ that solution is $z=1,$ and hence $x+y=1\in \mathbb Z.$</p> <p>For insight on why this kind of thing happens, see Cardano's Method for solving cubic equations. </p>
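The identity the answers above establish is easy to sanity-check numerically. Below is a minimal Python sketch (an addition for illustration, not part of the original posts); note that $2-\sqrt5$ is negative, so its real cube root has to be taken by hand, since a negative float raised to the power `1/3` in Python yields a complex principal root.

```python
# Sanity check (not a proof): the real cube roots of 2 + sqrt(5) and
# 2 - sqrt(5) should sum to exactly 1.
s5 = 5 ** 0.5
x = (2 + s5) ** (1 / 3)      # 2 + sqrt(5) > 0, so ** (1/3) is the real cube root
y = -((s5 - 2) ** (1 / 3))   # 2 - sqrt(5) < 0: negate the cube root of |2 - sqrt(5)|
print(x + y)                 # 1.0 up to floating-point rounding
assert abs((x + y) - 1) < 1e-12
```

The printed value is $1$ to machine precision; $x$ and $y$ are in fact the golden ratio $\frac{1+\sqrt5}{2}$ and its conjugate $\frac{1-\sqrt5}{2}$, matching Answer 1.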
Maximal ideal of $\mathbb{Q}[x_1,...,x_n]$ is contained in only finitely many maximal ideals of $\mathbb{C}[x_1,...,x_n]$
2,082,882
6
https://math.stackexchange.com/questions/2082882/maximal-ideal-of-mathbbqx-1-x-n-is-contained-in-only-finitely-maximal
<p>Suppose $M\subset \mathbb{Q}[x_1,...,x_n]$ is maximal. I'd like to show that, as a subset of $R= \mathbb{C}[x_1,...,x_n]$, $M$ is contained in only finitely many maximal ideals. By the Hilbert Nullstellensatz, I know the maximal ideals of $R$ are of the form $(x_1-a_1,...,x_n-a_n)$, so perhaps if $M$ were contained in infinitely many of them, one could obtain a contradiction? I'm not quite sure where to go with this.</p>
[ { "answer_id": 2083382, "score": 7, "text": "<p>We can actually describe the maximal ideals containing $M$. Let me outline a more \"geometric\" approach (which might be useful if you are interested in algebraic geometry):</p>\n\n<ul>\n<li>The quotient $\\mathbf Q[x_1, \\dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\\mathbf Q$ (by Zariski's lemma). Let $\\alpha_i$ be the image of $x_i$ under this identification.</li>\n<li>Similarly, for a maximal ideal $N = (x_1 - \\beta_1,\\dots,x_n - \\beta_n)$ of $\\mathbf C[x_1, \\dots, x_n]$, the quotient $\\mathbf C[x_1, \\dots, x_n]/N$ is identified with $\\mathbf C$ via the isomorphism $$b_N: \\mathbf C[x_1, \\dots, x_n]/N \\rightarrow \\mathbf C$$ given by $b_N(x_i) = \\beta_i$. </li>\n<li>An inclusion $M \\subset N$ induces an embedding $$\\varphi: \\mathbf Q[x_1, \\dots, x_n]/M \\rightarrow \\mathbf C[x_1, \\dots, x_n]/N$$ of $L$ into $ \\mathbf C.$ </li>\n<li>Conversely, for an embedding $$\\sigma: L \\rightarrow \\mathbf C,$$ the maximal ideal $$(x_1 - \\sigma(\\alpha_1), \\dots, x_n - \\sigma(\\alpha_n) )$$ of $ \\mathbf C[x_1, \\dots, x_n]$ contains $M$.</li>\n</ul>\n\n<p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\\mathbf C[x_1, \\dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p>\n\n<hr>\n\n<p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\\mathbf A^n_\\mathbf C \\rightarrow \\mathbf A^n_\\mathbf Q $$ sends a closed point $p$ of $\\mathbf A^n_\\mathbf C$ to its Galois orbit $G_\\mathbf Q p$ regarded as a closed point of $\\mathbf A^n_\\mathbf Q$. The fiber of $G_\\mathbf Q p$ consists the points $\\sigma(p)$, for $\\sigma \\in G_\\mathbf Q = \\operatorname{Gal} (\\overline{\\mathbf Q}/\\mathbf Q)$.</p>\n" }, { "answer_id": 2082953, "score": 3, "text": "<p>If I haven't made a mistake, the result is true if $\\mathbb{Q} \\subseteq \\mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p>\n\n<p>Let $A = \\mathbb{Q}[X_1, ... , X_n], B = \\mathbb{C}[X_1, ... , X_n]$. Then $B$ is a flat extension of $A$, and so the going-down property holds.</p>\n\n<p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \\subset \\mathfrak p_1 \\subset \\cdots \\subset \\mathfrak p_{n-1} \\subset M$. </p>\n\n<p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \\cap A \\supseteq MB \\cap A \\supseteq M$, which implies $P \\cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \\supset \\cdots \\supset P_1$, contained in $P$, such that $P_i \\cap A = \\mathfrak p_i$.</p>\n\n<p>Now we have a chain of $n$ inclusions of prime ideals $0 \\subset P_1 \\subset \\cdots \\subset P_{n-1} \\subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p>\n\n<p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p>\n" }, { "answer_id": 2084277, "score": 3, "text": "<p>Here's another approach. 
By <a href=\"https://en.wikipedia.org/wiki/Zariski%27s_lemma\" rel=\"nofollow noreferrer\">Zariski's lemma</a>, the quotient $L=\\mathbb{Q}[x_1,\\dots,x_n]/M$ is a finite extension of $\\mathbb{Q}$. Now tensoring the exact sequence $$M\\to\\mathbb{Q}[x_1,\\dots,x_n]\\to L\\to 0$$ with $\\mathbb{C}$ over $\\mathbb{Q}$ gives an exact sequence $$M\\otimes\\mathbb{C}\\to \\mathbb{C}[x_1,\\dots,x_n]\\to L\\otimes\\mathbb{C}\\to 0.$$ The image of the first map is just the ideal in $\\mathbb{C}[x_1,\\dots,x_n]$ generated by $M$, so this says that $L\\otimes\\mathbb{C}$ is the quotient of $\\mathbb{C}[x_1,\\dots,x_n]$ by the ideal generated by $M$.</p>\n\n<p>Now $L$ is finite-dimensional as a $\\mathbb{Q}$-vector space, and hence $L\\otimes\\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\\mathbb{C}$-vector space. In particular, this means $L\\otimes\\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\\otimes\\mathbb{C}$ are in bijection with maximal ideals of $\\mathbb{C}[x_1,\\dots,x_n]$ containing $M$, so we're done.</p>\n" }, { "answer_id": 2083382, "score": 7, "text": "<p>We can actually describe the maximal ideals containing $M$. Let me outline a more \"geometric\" approach (which might be useful if you are interested in algebraic geometry):</p>\n\n<ul>\n<li>The quotient $\\mathbf Q[x_1, \\dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\\mathbf Q$ (by Zariski's lemma). Let $\\alpha_i$ be the image of $x_i$ under this identification.</li>\n<li>Similarly, for a maximal ideal $N = (x_1 - \\beta_1,\\dots,x_n - \\beta_n)$ of $\\mathbf C[x_1, \\dots, x_n]$, the quotient $\\mathbf C[x_1, \\dots, x_n]/N$ is identified with $\\mathbf C$ via the isomorphism $$b_N: \\mathbf C[x_1, \\dots, x_n]/N \\rightarrow \\mathbf C$$ given by $b_N(x_i) = \\beta_i$. </li>\n<li>An inclusion $M \\subset N$ induces an embedding $$\\varphi: \\mathbf Q[x_1, \\dots, x_n]/M \\rightarrow \\mathbf C[x_1, \\dots, x_n]/N$$ of $L$ into $ \\mathbf C.$ </li>\n<li>Conversely, for an embedding $$\\sigma: L \\rightarrow \\mathbf C,$$ the maximal ideal $$(x_1 - \\sigma(\\alpha_1), \\dots, x_n - \\sigma(\\alpha_n) )$$ of $ \\mathbf C[x_1, \\dots, x_n]$ contains $M$.</li>\n</ul>\n\n<p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\\mathbf C[x_1, \\dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p>\n\n<hr>\n\n<p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\\mathbf A^n_\\mathbf C \\rightarrow \\mathbf A^n_\\mathbf Q $$ sends a closed point $p$ of $\\mathbf A^n_\\mathbf C$ to its Galois orbit $G_\\mathbf Q p$ regarded as a closed point of $\\mathbf A^n_\\mathbf Q$. The fiber of $G_\\mathbf Q p$ consists the points $\\sigma(p)$, for $\\sigma \\in G_\\mathbf Q = \\operatorname{Gal} (\\overline{\\mathbf Q}/\\mathbf Q)$.</p>\n" }, { "answer_id": 2082953, "score": 3, "text": "<p>If I haven't made a mistake, the result is true if $\\mathbb{Q} \\subseteq \\mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p>\n\n<p>Let $A = \\mathbb{Q}[X_1, ... , X_n], B = \\mathbb{C}[X_1, ... , X_n]$. 
Then $B$ is a flat extension of $A$, and so the going-down property holds.</p>\n\n<p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \\subset \\mathfrak p_1 \\subset \\cdots \\subset \\mathfrak p_{n-1} \\subset M$. </p>\n\n<p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \\cap A \\supseteq MB \\cap A \\supseteq M$, which implies $P \\cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \\supset \\cdots \\supset P_1$, contained in $P$, such that $P_i \\cap A = \\mathfrak p_i$.</p>\n\n<p>Now we have a chain of $n$ inclusions of prime ideals $0 \\subset P_1 \\subset \\cdots \\subset P_{n-1} \\subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p>\n\n<p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p>\n" }, { "answer_id": 2084277, "score": 3, "text": "<p>Here's another approach. By <a href=\"https://en.wikipedia.org/wiki/Zariski%27s_lemma\" rel=\"nofollow noreferrer\">Zariski's lemma</a>, the quotient $L=\\mathbb{Q}[x_1,\\dots,x_n]/M$ is a finite extension of $\\mathbb{Q}$. Now tensoring the exact sequence $$M\\to\\mathbb{Q}[x_1,\\dots,x_n]\\to L\\to 0$$ with $\\mathbb{C}$ over $\\mathbb{Q}$ gives an exact sequence $$M\\otimes\\mathbb{C}\\to \\mathbb{C}[x_1,\\dots,x_n]\\to L\\otimes\\mathbb{C}\\to 0.$$ The image of the first map is just the ideal in $\\mathbb{C}[x_1,\\dots,x_n]$ generated by $M$, so this says that $L\\otimes\\mathbb{C}$ is the quotient of $\\mathbb{C}[x_1,\\dots,x_n]$ by the ideal generated by $M$.</p>\n\n<p>Now $L$ is finite-dimensional as a $\\mathbb{Q}$-vector space, and hence $L\\otimes\\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\\mathbb{C}$-vector space. In particular, this means $L\\otimes\\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\\otimes\\mathbb{C}$ are in bijection with maximal ideals of $\\mathbb{C}[x_1,\\dots,x_n]$ containing $M$, so we're done.</p>\n" }, { "answer_id": 2083382, "score": 7, "text": "<p>We can actually describe the maximal ideals containing $M$. Let me outline a more \"geometric\" approach (which might be useful if you are interested in algebraic geometry):</p>\n\n<ul>\n<li>The quotient $\\mathbf Q[x_1, \\dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\\mathbf Q$ (by Zariski's lemma). Let $\\alpha_i$ be the image of $x_i$ under this identification.</li>\n<li>Similarly, for a maximal ideal $N = (x_1 - \\beta_1,\\dots,x_n - \\beta_n)$ of $\\mathbf C[x_1, \\dots, x_n]$, the quotient $\\mathbf C[x_1, \\dots, x_n]/N$ is identified with $\\mathbf C$ via the isomorphism $$b_N: \\mathbf C[x_1, \\dots, x_n]/N \\rightarrow \\mathbf C$$ given by $b_N(x_i) = \\beta_i$. 
</li>\n<li>An inclusion $M \\subset N$ induces an embedding $$\\varphi: \\mathbf Q[x_1, \\dots, x_n]/M \\rightarrow \\mathbf C[x_1, \\dots, x_n]/N$$ of $L$ into $ \\mathbf C.$ </li>\n<li>Conversely, for an embedding $$\\sigma: L \\rightarrow \\mathbf C,$$ the maximal ideal $$(x_1 - \\sigma(\\alpha_1), \\dots, x_n - \\sigma(\\alpha_n) )$$ of $ \\mathbf C[x_1, \\dots, x_n]$ contains $M$.</li>\n</ul>\n\n<p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\\mathbf C[x_1, \\dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p>\n\n<hr>\n\n<p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\\mathbf A^n_\\mathbf C \\rightarrow \\mathbf A^n_\\mathbf Q $$ sends a closed point $p$ of $\\mathbf A^n_\\mathbf C$ to its Galois orbit $G_\\mathbf Q p$ regarded as a closed point of $\\mathbf A^n_\\mathbf Q$. The fiber of $G_\\mathbf Q p$ consists the points $\\sigma(p)$, for $\\sigma \\in G_\\mathbf Q = \\operatorname{Gal} (\\overline{\\mathbf Q}/\\mathbf Q)$.</p>\n" }, { "answer_id": 2082953, "score": 3, "text": "<p>If I haven't made a mistake, the result is true if $\\mathbb{Q} \\subseteq \\mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p>\n\n<p>Let $A = \\mathbb{Q}[X_1, ... , X_n], B = \\mathbb{C}[X_1, ... , X_n]$. Then $B$ is a flat extension of $A$, and so the going-down property holds.</p>\n\n<p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \\subset \\mathfrak p_1 \\subset \\cdots \\subset \\mathfrak p_{n-1} \\subset M$. </p>\n\n<p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \\cap A \\supseteq MB \\cap A \\supseteq M$, which implies $P \\cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \\supset \\cdots \\supset P_1$, contained in $P$, such that $P_i \\cap A = \\mathfrak p_i$.</p>\n\n<p>Now we have a chain of $n$ inclusions of prime ideals $0 \\subset P_1 \\subset \\cdots \\subset P_{n-1} \\subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p>\n\n<p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p>\n" }, { "answer_id": 2084277, "score": 3, "text": "<p>Here's another approach. By <a href=\"https://en.wikipedia.org/wiki/Zariski%27s_lemma\" rel=\"nofollow noreferrer\">Zariski's lemma</a>, the quotient $L=\\mathbb{Q}[x_1,\\dots,x_n]/M$ is a finite extension of $\\mathbb{Q}$. Now tensoring the exact sequence $$M\\to\\mathbb{Q}[x_1,\\dots,x_n]\\to L\\to 0$$ with $\\mathbb{C}$ over $\\mathbb{Q}$ gives an exact sequence $$M\\otimes\\mathbb{C}\\to \\mathbb{C}[x_1,\\dots,x_n]\\to L\\otimes\\mathbb{C}\\to 0.$$ The image of the first map is just the ideal in $\\mathbb{C}[x_1,\\dots,x_n]$ generated by $M$, so this says that $L\\otimes\\mathbb{C}$ is the quotient of $\\mathbb{C}[x_1,\\dots,x_n]$ by the ideal generated by $M$.</p>\n\n<p>Now $L$ is finite-dimensional as a $\\mathbb{Q}$-vector space, and hence $L\\otimes\\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\\mathbb{C}$-vector space. 
In particular, this means $L\\otimes\\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\\otimes\\mathbb{C}$ are in bijection with maximal ideals of $\\mathbb{C}[x_1,\\dots,x_n]$ containing $M$, so we're done.</p>\n" }, { "answer_id": 2083382, "score": 7, "text": "<p>We can actually describe the maximal ideals containing $M$. Let me outline a more \"geometric\" approach (which might be useful if you are interested in algebraic geometry):</p>\n\n<ul>\n<li>The quotient $\\mathbf Q[x_1, \\dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\\mathbf Q$ (by Zariski's lemma). Let $\\alpha_i$ be the image of $x_i$ under this identification.</li>\n<li>Similarly, for a maximal ideal $N = (x_1 - \\beta_1,\\dots,x_n - \\beta_n)$ of $\\mathbf C[x_1, \\dots, x_n]$, the quotient $\\mathbf C[x_1, \\dots, x_n]/N$ is identified with $\\mathbf C$ via the isomorphism $$b_N: \\mathbf C[x_1, \\dots, x_n]/N \\rightarrow \\mathbf C$$ given by $b_N(x_i) = \\beta_i$. </li>\n<li>An inclusion $M \\subset N$ induces an embedding $$\\varphi: \\mathbf Q[x_1, \\dots, x_n]/M \\rightarrow \\mathbf C[x_1, \\dots, x_n]/N$$ of $L$ into $ \\mathbf C.$ </li>\n<li>Conversely, for an embedding $$\\sigma: L \\rightarrow \\mathbf C,$$ the maximal ideal $$(x_1 - \\sigma(\\alpha_1), \\dots, x_n - \\sigma(\\alpha_n) )$$ of $ \\mathbf C[x_1, \\dots, x_n]$ contains $M$.</li>\n</ul>\n\n<p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\\mathbf C[x_1, \\dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p>\n\n<hr>\n\n<p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\\mathbf A^n_\\mathbf C \\rightarrow \\mathbf A^n_\\mathbf Q $$ sends a closed point $p$ of $\\mathbf A^n_\\mathbf C$ to its Galois orbit $G_\\mathbf Q p$ regarded as a closed point of $\\mathbf A^n_\\mathbf Q$. The fiber of $G_\\mathbf Q p$ consists the points $\\sigma(p)$, for $\\sigma \\in G_\\mathbf Q = \\operatorname{Gal} (\\overline{\\mathbf Q}/\\mathbf Q)$.</p>\n" }, { "answer_id": 2082953, "score": 3, "text": "<p>If I haven't made a mistake, the result is true if $\\mathbb{Q} \\subseteq \\mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p>\n\n<p>Let $A = \\mathbb{Q}[X_1, ... , X_n], B = \\mathbb{C}[X_1, ... , X_n]$. Then $B$ is a flat extension of $A$, and so the going-down property holds.</p>\n\n<p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \\subset \\mathfrak p_1 \\subset \\cdots \\subset \\mathfrak p_{n-1} \\subset M$. </p>\n\n<p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \\cap A \\supseteq MB \\cap A \\supseteq M$, which implies $P \\cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \\supset \\cdots \\supset P_1$, contained in $P$, such that $P_i \\cap A = \\mathfrak p_i$.</p>\n\n<p>Now we have a chain of $n$ inclusions of prime ideals $0 \\subset P_1 \\subset \\cdots \\subset P_{n-1} \\subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p>\n\n<p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. 
So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p>\n" }, { "answer_id": 2084277, "score": 3, "text": "<p>Here's another approach. By <a href=\"https://en.wikipedia.org/wiki/Zariski%27s_lemma\" rel=\"nofollow noreferrer\">Zariski's lemma</a>, the quotient $L=\\mathbb{Q}[x_1,\\dots,x_n]/M$ is a finite extension of $\\mathbb{Q}$. Now tensoring the exact sequence $$M\\to\\mathbb{Q}[x_1,\\dots,x_n]\\to L\\to 0$$ with $\\mathbb{C}$ over $\\mathbb{Q}$ gives an exact sequence $$M\\otimes\\mathbb{C}\\to \\mathbb{C}[x_1,\\dots,x_n]\\to L\\otimes\\mathbb{C}\\to 0.$$ The image of the first map is just the ideal in $\\mathbb{C}[x_1,\\dots,x_n]$ generated by $M$, so this says that $L\\otimes\\mathbb{C}$ is the quotient of $\\mathbb{C}[x_1,\\dots,x_n]$ by the ideal generated by $M$.</p>\n\n<p>Now $L$ is finite-dimensional as a $\\mathbb{Q}$-vector space, and hence $L\\otimes\\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\\mathbb{C}$-vector space. In particular, this means $L\\otimes\\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\\otimes\\mathbb{C}$ are in bijection with maximal ideals of $\\mathbb{C}[x_1,\\dots,x_n]$ containing $M$, so we're done.</p>\n" }, { "answer_id": 2083382, "score": 7, "text": "<p>We can actually describe the maximal ideals containing $M$. Let me outline a more \"geometric\" approach (which might be useful if you are interested in algebraic geometry):</p>\n\n<ul>\n<li>The quotient $\\mathbf Q[x_1, \\dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\\mathbf Q$ (by Zariski's lemma). Let $\\alpha_i$ be the image of $x_i$ under this identification.</li>\n<li>Similarly, for a maximal ideal $N = (x_1 - \\beta_1,\\dots,x_n - \\beta_n)$ of $\\mathbf C[x_1, \\dots, x_n]$, the quotient $\\mathbf C[x_1, \\dots, x_n]/N$ is identified with $\\mathbf C$ via the isomorphism $$b_N: \\mathbf C[x_1, \\dots, x_n]/N \\rightarrow \\mathbf C$$ given by $b_N(x_i) = \\beta_i$. </li>\n<li>An inclusion $M \\subset N$ induces an embedding $$\\varphi: \\mathbf Q[x_1, \\dots, x_n]/M \\rightarrow \\mathbf C[x_1, \\dots, x_n]/N$$ of $L$ into $ \\mathbf C.$ </li>\n<li>Conversely, for an embedding $$\\sigma: L \\rightarrow \\mathbf C,$$ the maximal ideal $$(x_1 - \\sigma(\\alpha_1), \\dots, x_n - \\sigma(\\alpha_n) )$$ of $ \\mathbf C[x_1, \\dots, x_n]$ contains $M$.</li>\n</ul>\n\n<p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\\mathbf C[x_1, \\dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p>\n\n<hr>\n\n<p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\\mathbf A^n_\\mathbf C \\rightarrow \\mathbf A^n_\\mathbf Q $$ sends a closed point $p$ of $\\mathbf A^n_\\mathbf C$ to its Galois orbit $G_\\mathbf Q p$ regarded as a closed point of $\\mathbf A^n_\\mathbf Q$. The fiber of $G_\\mathbf Q p$ consists the points $\\sigma(p)$, for $\\sigma \\in G_\\mathbf Q = \\operatorname{Gal} (\\overline{\\mathbf Q}/\\mathbf Q)$.</p>\n" }, { "answer_id": 2082953, "score": 3, "text": "<p>If I haven't made a mistake, the result is true if $\\mathbb{Q} \\subseteq \\mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p>\n\n<p>Let $A = \\mathbb{Q}[X_1, ... , X_n], B = \\mathbb{C}[X_1, ... , X_n]$. 
Then $B$ is a flat extension of $A$, and so the going-down property holds.</p>\n\n<p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \\subset \\mathfrak p_1 \\subset \\cdots \\subset \\mathfrak p_{n-1} \\subset M$. </p>\n\n<p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \\cap A \\supseteq MB \\cap A \\supseteq M$, which implies $P \\cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \\supset \\cdots \\supset P_1$, contained in $P$, such that $P_i \\cap A = \\mathfrak p_i$.</p>\n\n<p>Now we have a chain of $n$ inclusions of prime ideals $0 \\subset P_1 \\subset \\cdots \\subset P_{n-1} \\subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p>\n\n<p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p>\n" }, { "answer_id": 2084277, "score": 3, "text": "<p>Here's another approach. By <a href=\"https://en.wikipedia.org/wiki/Zariski%27s_lemma\" rel=\"nofollow noreferrer\">Zariski's lemma</a>, the quotient $L=\\mathbb{Q}[x_1,\\dots,x_n]/M$ is a finite extension of $\\mathbb{Q}$. Now tensoring the exact sequence $$M\\to\\mathbb{Q}[x_1,\\dots,x_n]\\to L\\to 0$$ with $\\mathbb{C}$ over $\\mathbb{Q}$ gives an exact sequence $$M\\otimes\\mathbb{C}\\to \\mathbb{C}[x_1,\\dots,x_n]\\to L\\otimes\\mathbb{C}\\to 0.$$ The image of the first map is just the ideal in $\\mathbb{C}[x_1,\\dots,x_n]$ generated by $M$, so this says that $L\\otimes\\mathbb{C}$ is the quotient of $\\mathbb{C}[x_1,\\dots,x_n]$ by the ideal generated by $M$.</p>\n\n<p>Now $L$ is finite-dimensional as a $\\mathbb{Q}$-vector space, and hence $L\\otimes\\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\\mathbb{C}$-vector space. In particular, this means $L\\otimes\\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\\otimes\\mathbb{C}$ are in bijection with maximal ideals of $\\mathbb{C}[x_1,\\dots,x_n]$ containing $M$, so we're done.</p>\n" }, { "answer_id": 2083382, "score": 7, "text": "<p>We can actually describe the maximal ideals containing $M$. Let me outline a more \"geometric\" approach (which might be useful if you are interested in algebraic geometry):</p>\n\n<ul>\n<li>The quotient $\\mathbf Q[x_1, \\dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\\mathbf Q$ (by Zariski's lemma). Let $\\alpha_i$ be the image of $x_i$ under this identification.</li>\n<li>Similarly, for a maximal ideal $N = (x_1 - \\beta_1,\\dots,x_n - \\beta_n)$ of $\\mathbf C[x_1, \\dots, x_n]$, the quotient $\\mathbf C[x_1, \\dots, x_n]/N$ is identified with $\\mathbf C$ via the isomorphism $$b_N: \\mathbf C[x_1, \\dots, x_n]/N \\rightarrow \\mathbf C$$ given by $b_N(x_i) = \\beta_i$. 
</li>\n<li>An inclusion $M \\subset N$ induces an embedding $$\\varphi: \\mathbf Q[x_1, \\dots, x_n]/M \\rightarrow \\mathbf C[x_1, \\dots, x_n]/N$$ of $L$ into $ \\mathbf C.$ </li>\n<li>Conversely, for an embedding $$\\sigma: L \\rightarrow \\mathbf C,$$ the maximal ideal $$(x_1 - \\sigma(\\alpha_1), \\dots, x_n - \\sigma(\\alpha_n) )$$ of $ \\mathbf C[x_1, \\dots, x_n]$ contains $M$.</li>\n</ul>\n\n<p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\\mathbf C[x_1, \\dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p>\n\n<hr>\n\n<p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\\mathbf A^n_\\mathbf C \\rightarrow \\mathbf A^n_\\mathbf Q $$ sends a closed point $p$ of $\\mathbf A^n_\\mathbf C$ to its Galois orbit $G_\\mathbf Q p$ regarded as a closed point of $\\mathbf A^n_\\mathbf Q$. The fiber of $G_\\mathbf Q p$ consists the points $\\sigma(p)$, for $\\sigma \\in G_\\mathbf Q = \\operatorname{Gal} (\\overline{\\mathbf Q}/\\mathbf Q)$.</p>\n" }, { "answer_id": 2082953, "score": 3, "text": "<p>If I haven't made a mistake, the result is true if $\\mathbb{Q} \\subseteq \\mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p>\n\n<p>Let $A = \\mathbb{Q}[X_1, ... , X_n], B = \\mathbb{C}[X_1, ... , X_n]$. Then $B$ is a flat extension of $A$, and so the going-down property holds.</p>\n\n<p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \\subset \\mathfrak p_1 \\subset \\cdots \\subset \\mathfrak p_{n-1} \\subset M$. </p>\n\n<p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \\cap A \\supseteq MB \\cap A \\supseteq M$, which implies $P \\cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \\supset \\cdots \\supset P_1$, contained in $P$, such that $P_i \\cap A = \\mathfrak p_i$.</p>\n\n<p>Now we have a chain of $n$ inclusions of prime ideals $0 \\subset P_1 \\subset \\cdots \\subset P_{n-1} \\subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p>\n\n<p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p>\n" }, { "answer_id": 2084277, "score": 3, "text": "<p>Here's another approach. By <a href=\"https://en.wikipedia.org/wiki/Zariski%27s_lemma\" rel=\"nofollow noreferrer\">Zariski's lemma</a>, the quotient $L=\\mathbb{Q}[x_1,\\dots,x_n]/M$ is a finite extension of $\\mathbb{Q}$. Now tensoring the exact sequence $$M\\to\\mathbb{Q}[x_1,\\dots,x_n]\\to L\\to 0$$ with $\\mathbb{C}$ over $\\mathbb{Q}$ gives an exact sequence $$M\\otimes\\mathbb{C}\\to \\mathbb{C}[x_1,\\dots,x_n]\\to L\\otimes\\mathbb{C}\\to 0.$$ The image of the first map is just the ideal in $\\mathbb{C}[x_1,\\dots,x_n]$ generated by $M$, so this says that $L\\otimes\\mathbb{C}$ is the quotient of $\\mathbb{C}[x_1,\\dots,x_n]$ by the ideal generated by $M$.</p>\n\n<p>Now $L$ is finite-dimensional as a $\\mathbb{Q}$-vector space, and hence $L\\otimes\\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\\mathbb{C}$-vector space. 
In particular, this means $L\\otimes\\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\\otimes\\mathbb{C}$ are in bijection with maximal ideals of $\\mathbb{C}[x_1,\\dots,x_n]$ containing $M$, so we're done.</p>\n" }, { "answer_id": 2083382, "score": 7, "text": "<p>We can actually describe the maximal ideals containing $M$. Let me outline a more \"geometric\" approach (which might be useful if you are interested in algebraic geometry):</p>\n\n<ul>\n<li>The quotient $\\mathbf Q[x_1, \\dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\\mathbf Q$ (by Zariski's lemma). Let $\\alpha_i$ be the image of $x_i$ under this identification.</li>\n<li>Similarly, for a maximal ideal $N = (x_1 - \\beta_1,\\dots,x_n - \\beta_n)$ of $\\mathbf C[x_1, \\dots, x_n]$, the quotient $\\mathbf C[x_1, \\dots, x_n]/N$ is identified with $\\mathbf C$ via the isomorphism $$b_N: \\mathbf C[x_1, \\dots, x_n]/N \\rightarrow \\mathbf C$$ given by $b_N(x_i) = \\beta_i$. </li>\n<li>An inclusion $M \\subset N$ induces an embedding $$\\varphi: \\mathbf Q[x_1, \\dots, x_n]/M \\rightarrow \\mathbf C[x_1, \\dots, x_n]/N$$ of $L$ into $ \\mathbf C.$ </li>\n<li>Conversely, for an embedding $$\\sigma: L \\rightarrow \\mathbf C,$$ the maximal ideal $$(x_1 - \\sigma(\\alpha_1), \\dots, x_n - \\sigma(\\alpha_n) )$$ of $ \\mathbf C[x_1, \\dots, x_n]$ contains $M$.</li>\n</ul>\n\n<p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\\mathbf C[x_1, \\dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p>\n\n<hr>\n\n<p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\\mathbf A^n_\\mathbf C \\rightarrow \\mathbf A^n_\\mathbf Q $$ sends a closed point $p$ of $\\mathbf A^n_\\mathbf C$ to its Galois orbit $G_\\mathbf Q p$ regarded as a closed point of $\\mathbf A^n_\\mathbf Q$. The fiber of $G_\\mathbf Q p$ consists the points $\\sigma(p)$, for $\\sigma \\in G_\\mathbf Q = \\operatorname{Gal} (\\overline{\\mathbf Q}/\\mathbf Q)$.</p>\n" }, { "answer_id": 2082953, "score": 3, "text": "<p>If I haven't made a mistake, the result is true if $\\mathbb{Q} \\subseteq \\mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p>\n\n<p>Let $A = \\mathbb{Q}[X_1, ... , X_n], B = \\mathbb{C}[X_1, ... , X_n]$. Then $B$ is a flat extension of $A$, and so the going-down property holds.</p>\n\n<p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \\subset \\mathfrak p_1 \\subset \\cdots \\subset \\mathfrak p_{n-1} \\subset M$. </p>\n\n<p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \\cap A \\supseteq MB \\cap A \\supseteq M$, which implies $P \\cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \\supset \\cdots \\supset P_1$, contained in $P$, such that $P_i \\cap A = \\mathfrak p_i$.</p>\n\n<p>Now we have a chain of $n$ inclusions of prime ideals $0 \\subset P_1 \\subset \\cdots \\subset P_{n-1} \\subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p>\n\n<p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. 
So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p>\n" }, { "answer_id": 2084277, "score": 3, "text": "<p>Here's another approach. By <a href=\"https://en.wikipedia.org/wiki/Zariski%27s_lemma\" rel=\"nofollow noreferrer\">Zariski's lemma</a>, the quotient $L=\\mathbb{Q}[x_1,\\dots,x_n]/M$ is a finite extension of $\\mathbb{Q}$. Now tensoring the exact sequence $$M\\to\\mathbb{Q}[x_1,\\dots,x_n]\\to L\\to 0$$ with $\\mathbb{C}$ over $\\mathbb{Q}$ gives an exact sequence $$M\\otimes\\mathbb{C}\\to \\mathbb{C}[x_1,\\dots,x_n]\\to L\\otimes\\mathbb{C}\\to 0.$$ The image of the first map is just the ideal in $\\mathbb{C}[x_1,\\dots,x_n]$ generated by $M$, so this says that $L\\otimes\\mathbb{C}$ is the quotient of $\\mathbb{C}[x_1,\\dots,x_n]$ by the ideal generated by $M$.</p>\n\n<p>Now $L$ is finite-dimensional as a $\\mathbb{Q}$-vector space, and hence $L\\otimes\\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\\mathbb{C}$-vector space. In particular, this means $L\\otimes\\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\\otimes\\mathbb{C}$ are in bijection with maximal ideals of $\\mathbb{C}[x_1,\\dots,x_n]$ containing $M$, so we're done.</p>\n" }, { "answer_id": 2083382, "score": 7, "text": "<p>We can actually describe the maximal ideals containing $M$. Let me outline a more \"geometric\" approach (which might be useful if you are interested in algebraic geometry):</p>\n\n<ul>\n<li>The quotient $\\mathbf Q[x_1, \\dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\\mathbf Q$ (by Zariski's lemma). Let $\\alpha_i$ be the image of $x_i$ under this identification.</li>\n<li>Similarly, for a maximal ideal $N = (x_1 - \\beta_1,\\dots,x_n - \\beta_n)$ of $\\mathbf C[x_1, \\dots, x_n]$, the quotient $\\mathbf C[x_1, \\dots, x_n]/N$ is identified with $\\mathbf C$ via the isomorphism $$b_N: \\mathbf C[x_1, \\dots, x_n]/N \\rightarrow \\mathbf C$$ given by $b_N(x_i) = \\beta_i$. </li>\n<li>An inclusion $M \\subset N$ induces an embedding $$\\varphi: \\mathbf Q[x_1, \\dots, x_n]/M \\rightarrow \\mathbf C[x_1, \\dots, x_n]/N$$ of $L$ into $ \\mathbf C.$ </li>\n<li>Conversely, for an embedding $$\\sigma: L \\rightarrow \\mathbf C,$$ the maximal ideal $$(x_1 - \\sigma(\\alpha_1), \\dots, x_n - \\sigma(\\alpha_n) )$$ of $ \\mathbf C[x_1, \\dots, x_n]$ contains $M$.</li>\n</ul>\n\n<p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\\mathbf C[x_1, \\dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p>\n\n<hr>\n\n<p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\\mathbf A^n_\\mathbf C \\rightarrow \\mathbf A^n_\\mathbf Q $$ sends a closed point $p$ of $\\mathbf A^n_\\mathbf C$ to its Galois orbit $G_\\mathbf Q p$ regarded as a closed point of $\\mathbf A^n_\\mathbf Q$. The fiber of $G_\\mathbf Q p$ consists the points $\\sigma(p)$, for $\\sigma \\in G_\\mathbf Q = \\operatorname{Gal} (\\overline{\\mathbf Q}/\\mathbf Q)$.</p>\n" }, { "answer_id": 2082953, "score": 3, "text": "<p>If I haven't made a mistake, the result is true if $\\mathbb{Q} \\subseteq \\mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p>\n\n<p>Let $A = \\mathbb{Q}[X_1, ... , X_n], B = \\mathbb{C}[X_1, ... , X_n]$. 
Then $B$ is a flat extension of $A$, and so the going-down property holds.</p>\n\n<p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \\subset \\mathfrak p_1 \\subset \\cdots \\subset \\mathfrak p_{n-1} \\subset M$. </p>\n\n<p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \\cap A \\supseteq MB \\cap A \\supseteq M$, which implies $P \\cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \\supset \\cdots \\supset P_1$, contained in $P$, such that $P_i \\cap A = \\mathfrak p_i$.</p>\n\n<p>Now we have a chain of $n$ inclusions of prime ideals $0 \\subset P_1 \\subset \\cdots \\subset P_{n-1} \\subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p>\n\n<p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p>\n" }, { "answer_id": 2084277, "score": 3, "text": "<p>Here's another approach. By <a href=\"https://en.wikipedia.org/wiki/Zariski%27s_lemma\" rel=\"nofollow noreferrer\">Zariski's lemma</a>, the quotient $L=\\mathbb{Q}[x_1,\\dots,x_n]/M$ is a finite extension of $\\mathbb{Q}$. Now tensoring the exact sequence $$M\\to\\mathbb{Q}[x_1,\\dots,x_n]\\to L\\to 0$$ with $\\mathbb{C}$ over $\\mathbb{Q}$ gives an exact sequence $$M\\otimes\\mathbb{C}\\to \\mathbb{C}[x_1,\\dots,x_n]\\to L\\otimes\\mathbb{C}\\to 0.$$ The image of the first map is just the ideal in $\\mathbb{C}[x_1,\\dots,x_n]$ generated by $M$, so this says that $L\\otimes\\mathbb{C}$ is the quotient of $\\mathbb{C}[x_1,\\dots,x_n]$ by the ideal generated by $M$.</p>\n\n<p>Now $L$ is finite-dimensional as a $\\mathbb{Q}$-vector space, and hence $L\\otimes\\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\\mathbb{C}$-vector space. In particular, this means $L\\otimes\\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\\otimes\\mathbb{C}$ are in bijection with maximal ideals of $\\mathbb{C}[x_1,\\dots,x_n]$ containing $M$, so we're done.</p>\n" }, { "answer_id": 2083382, "score": 7, "text": "<p>We can actually describe the maximal ideals containing $M$. Let me outline a more \"geometric\" approach (which might be useful if you are interested in algebraic geometry):</p>\n\n<ul>\n<li>The quotient $\\mathbf Q[x_1, \\dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\\mathbf Q$ (by Zariski's lemma). Let $\\alpha_i$ be the image of $x_i$ under this identification.</li>\n<li>Similarly, for a maximal ideal $N = (x_1 - \\beta_1,\\dots,x_n - \\beta_n)$ of $\\mathbf C[x_1, \\dots, x_n]$, the quotient $\\mathbf C[x_1, \\dots, x_n]/N$ is identified with $\\mathbf C$ via the isomorphism $$b_N: \\mathbf C[x_1, \\dots, x_n]/N \\rightarrow \\mathbf C$$ given by $b_N(x_i) = \\beta_i$. 
</li>\n<li>An inclusion $M \\subset N$ induces an embedding $$\\varphi: \\mathbf Q[x_1, \\dots, x_n]/M \\rightarrow \\mathbf C[x_1, \\dots, x_n]/N$$ of $L$ into $ \\mathbf C.$ </li>\n<li>Conversely, for an embedding $$\\sigma: L \\rightarrow \\mathbf C,$$ the maximal ideal $$(x_1 - \\sigma(\\alpha_1), \\dots, x_n - \\sigma(\\alpha_n) )$$ of $ \\mathbf C[x_1, \\dots, x_n]$ contains $M$.</li>\n</ul>\n\n<p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\\mathbf C[x_1, \\dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p>\n\n<hr>\n\n<p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\\mathbf A^n_\\mathbf C \\rightarrow \\mathbf A^n_\\mathbf Q $$ sends a closed point $p$ of $\\mathbf A^n_\\mathbf C$ to its Galois orbit $G_\\mathbf Q p$ regarded as a closed point of $\\mathbf A^n_\\mathbf Q$. The fiber of $G_\\mathbf Q p$ consists the points $\\sigma(p)$, for $\\sigma \\in G_\\mathbf Q = \\operatorname{Gal} (\\overline{\\mathbf Q}/\\mathbf Q)$.</p>\n" }, { "answer_id": 2082953, "score": 3, "text": "<p>If I haven't made a mistake, the result is true if $\\mathbb{Q} \\subseteq \\mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p>\n\n<p>Let $A = \\mathbb{Q}[X_1, ... , X_n], B = \\mathbb{C}[X_1, ... , X_n]$. Then $B$ is a flat extension of $A$, and so the going-down property holds.</p>\n\n<p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \\subset \\mathfrak p_1 \\subset \\cdots \\subset \\mathfrak p_{n-1} \\subset M$. </p>\n\n<p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \\cap A \\supseteq MB \\cap A \\supseteq M$, which implies $P \\cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \\supset \\cdots \\supset P_1$, contained in $P$, such that $P_i \\cap A = \\mathfrak p_i$.</p>\n\n<p>Now we have a chain of $n$ inclusions of prime ideals $0 \\subset P_1 \\subset \\cdots \\subset P_{n-1} \\subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p>\n\n<p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p>\n" }, { "answer_id": 2084277, "score": 3, "text": "<p>Here's another approach. By <a href=\"https://en.wikipedia.org/wiki/Zariski%27s_lemma\" rel=\"nofollow noreferrer\">Zariski's lemma</a>, the quotient $L=\\mathbb{Q}[x_1,\\dots,x_n]/M$ is a finite extension of $\\mathbb{Q}$. Now tensoring the exact sequence $$M\\to\\mathbb{Q}[x_1,\\dots,x_n]\\to L\\to 0$$ with $\\mathbb{C}$ over $\\mathbb{Q}$ gives an exact sequence $$M\\otimes\\mathbb{C}\\to \\mathbb{C}[x_1,\\dots,x_n]\\to L\\otimes\\mathbb{C}\\to 0.$$ The image of the first map is just the ideal in $\\mathbb{C}[x_1,\\dots,x_n]$ generated by $M$, so this says that $L\\otimes\\mathbb{C}$ is the quotient of $\\mathbb{C}[x_1,\\dots,x_n]$ by the ideal generated by $M$.</p>\n\n<p>Now $L$ is finite-dimensional as a $\\mathbb{Q}$-vector space, and hence $L\\otimes\\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\\mathbb{C}$-vector space. 
In particular, this means $L\\otimes\\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\\otimes\\mathbb{C}$ are in bijection with maximal ideals of $\\mathbb{C}[x_1,\\dots,x_n]$ containing $M$, so we're done.</p>\n" }, { "answer_id": 2083382, "score": 7, "text": "<p>We can actually describe the maximal ideals containing $M$. Let me outline a more \"geometric\" approach (which might be useful if you are interested in algebraic geometry):</p>\n\n<ul>\n<li>The quotient $\\mathbf Q[x_1, \\dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\\mathbf Q$ (by Zariski's lemma). Let $\\alpha_i$ be the image of $x_i$ under this identification.</li>\n<li>Similarly, for a maximal ideal $N = (x_1 - \\beta_1,\\dots,x_n - \\beta_n)$ of $\\mathbf C[x_1, \\dots, x_n]$, the quotient $\\mathbf C[x_1, \\dots, x_n]/N$ is identified with $\\mathbf C$ via the isomorphism $$b_N: \\mathbf C[x_1, \\dots, x_n]/N \\rightarrow \\mathbf C$$ given by $b_N(x_i) = \\beta_i$. </li>\n<li>An inclusion $M \\subset N$ induces an embedding $$\\varphi: \\mathbf Q[x_1, \\dots, x_n]/M \\rightarrow \\mathbf C[x_1, \\dots, x_n]/N$$ of $L$ into $ \\mathbf C.$ </li>\n<li>Conversely, for an embedding $$\\sigma: L \\rightarrow \\mathbf C,$$ the maximal ideal $$(x_1 - \\sigma(\\alpha_1), \\dots, x_n - \\sigma(\\alpha_n) )$$ of $ \\mathbf C[x_1, \\dots, x_n]$ contains $M$.</li>\n</ul>\n\n<p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\\mathbf C[x_1, \\dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p>\n\n<hr>\n\n<p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\\mathbf A^n_\\mathbf C \\rightarrow \\mathbf A^n_\\mathbf Q $$ sends a closed point $p$ of $\\mathbf A^n_\\mathbf C$ to its Galois orbit $G_\\mathbf Q p$ regarded as a closed point of $\\mathbf A^n_\\mathbf Q$. The fiber of $G_\\mathbf Q p$ consists the points $\\sigma(p)$, for $\\sigma \\in G_\\mathbf Q = \\operatorname{Gal} (\\overline{\\mathbf Q}/\\mathbf Q)$.</p>\n" }, { "answer_id": 2082953, "score": 3, "text": "<p>If I haven't made a mistake, the result is true if $\\mathbb{Q} \\subseteq \\mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p>\n\n<p>Let $A = \\mathbb{Q}[X_1, ... , X_n], B = \\mathbb{C}[X_1, ... , X_n]$. Then $B$ is a flat extension of $A$, and so the going-down property holds.</p>\n\n<p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \\subset \\mathfrak p_1 \\subset \\cdots \\subset \\mathfrak p_{n-1} \\subset M$. </p>\n\n<p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \\cap A \\supseteq MB \\cap A \\supseteq M$, which implies $P \\cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \\supset \\cdots \\supset P_1$, contained in $P$, such that $P_i \\cap A = \\mathfrak p_i$.</p>\n\n<p>Now we have a chain of $n$ inclusions of prime ideals $0 \\subset P_1 \\subset \\cdots \\subset P_{n-1} \\subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p>\n\n<p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. 
So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p>\n" }, { "answer_id": 2084277, "score": 3, "text": "<p>Here's another approach. By <a href=\"https://en.wikipedia.org/wiki/Zariski%27s_lemma\" rel=\"nofollow noreferrer\">Zariski's lemma</a>, the quotient $L=\\mathbb{Q}[x_1,\\dots,x_n]/M$ is a finite extension of $\\mathbb{Q}$. Now tensoring the exact sequence $$M\\to\\mathbb{Q}[x_1,\\dots,x_n]\\to L\\to 0$$ with $\\mathbb{C}$ over $\\mathbb{Q}$ gives an exact sequence $$M\\otimes\\mathbb{C}\\to \\mathbb{C}[x_1,\\dots,x_n]\\to L\\otimes\\mathbb{C}\\to 0.$$ The image of the first map is just the ideal in $\\mathbb{C}[x_1,\\dots,x_n]$ generated by $M$, so this says that $L\\otimes\\mathbb{C}$ is the quotient of $\\mathbb{C}[x_1,\\dots,x_n]$ by the ideal generated by $M$.</p>\n\n<p>Now $L$ is finite-dimensional as a $\\mathbb{Q}$-vector space, and hence $L\\otimes\\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\\mathbb{C}$-vector space. In particular, this means $L\\otimes\\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\\otimes\\mathbb{C}$ are in bijection with maximal ideals of $\\mathbb{C}[x_1,\\dots,x_n]$ containing $M$, so we're done.</p>\n" }, { "answer_id": 2083382, "score": 7, "text": "<p>We can actually describe the maximal ideals containing $M$. Let me outline a more \"geometric\" approach (which might be useful if you are interested in algebraic geometry):</p>\n\n<ul>\n<li>The quotient $\\mathbf Q[x_1, \\dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\\mathbf Q$ (by Zariski's lemma). Let $\\alpha_i$ be the image of $x_i$ under this identification.</li>\n<li>Similarly, for a maximal ideal $N = (x_1 - \\beta_1,\\dots,x_n - \\beta_n)$ of $\\mathbf C[x_1, \\dots, x_n]$, the quotient $\\mathbf C[x_1, \\dots, x_n]/N$ is identified with $\\mathbf C$ via the isomorphism $$b_N: \\mathbf C[x_1, \\dots, x_n]/N \\rightarrow \\mathbf C$$ given by $b_N(x_i) = \\beta_i$. </li>\n<li>An inclusion $M \\subset N$ induces an embedding $$\\varphi: \\mathbf Q[x_1, \\dots, x_n]/M \\rightarrow \\mathbf C[x_1, \\dots, x_n]/N$$ of $L$ into $ \\mathbf C.$ </li>\n<li>Conversely, for an embedding $$\\sigma: L \\rightarrow \\mathbf C,$$ the maximal ideal $$(x_1 - \\sigma(\\alpha_1), \\dots, x_n - \\sigma(\\alpha_n) )$$ of $ \\mathbf C[x_1, \\dots, x_n]$ contains $M$.</li>\n</ul>\n\n<p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\\mathbf C[x_1, \\dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p>\n\n<hr>\n\n<p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\\mathbf A^n_\\mathbf C \\rightarrow \\mathbf A^n_\\mathbf Q $$ sends a closed point $p$ of $\\mathbf A^n_\\mathbf C$ to its Galois orbit $G_\\mathbf Q p$ regarded as a closed point of $\\mathbf A^n_\\mathbf Q$. The fiber of $G_\\mathbf Q p$ consists the points $\\sigma(p)$, for $\\sigma \\in G_\\mathbf Q = \\operatorname{Gal} (\\overline{\\mathbf Q}/\\mathbf Q)$.</p>\n" }, { "answer_id": 2082953, "score": 3, "text": "<p>If I haven't made a mistake, the result is true if $\\mathbb{Q} \\subseteq \\mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p>\n\n<p>Let $A = \\mathbb{Q}[X_1, ... , X_n], B = \\mathbb{C}[X_1, ... , X_n]$. 
Then $B$ is a flat extension of $A$, and so the going-down property holds.</p>\n\n<p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \\subset \\mathfrak p_1 \\subset \\cdots \\subset \\mathfrak p_{n-1} \\subset M$. </p>\n\n<p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \\cap A \\supseteq MB \\cap A \\supseteq M$, which implies $P \\cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \\supset \\cdots \\supset P_1$, contained in $P$, such that $P_i \\cap A = \\mathfrak p_i$.</p>\n\n<p>Now we have a chain of $n$ inclusions of prime ideals $0 \\subset P_1 \\subset \\cdots \\subset P_{n-1} \\subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p>\n\n<p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p>\n" }, { "answer_id": 2084277, "score": 3, "text": "<p>Here's another approach. By <a href=\"https://en.wikipedia.org/wiki/Zariski%27s_lemma\" rel=\"nofollow noreferrer\">Zariski's lemma</a>, the quotient $L=\\mathbb{Q}[x_1,\\dots,x_n]/M$ is a finite extension of $\\mathbb{Q}$. Now tensoring the exact sequence $$M\\to\\mathbb{Q}[x_1,\\dots,x_n]\\to L\\to 0$$ with $\\mathbb{C}$ over $\\mathbb{Q}$ gives an exact sequence $$M\\otimes\\mathbb{C}\\to \\mathbb{C}[x_1,\\dots,x_n]\\to L\\otimes\\mathbb{C}\\to 0.$$ The image of the first map is just the ideal in $\\mathbb{C}[x_1,\\dots,x_n]$ generated by $M$, so this says that $L\\otimes\\mathbb{C}$ is the quotient of $\\mathbb{C}[x_1,\\dots,x_n]$ by the ideal generated by $M$.</p>\n\n<p>Now $L$ is finite-dimensional as a $\\mathbb{Q}$-vector space, and hence $L\\otimes\\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\\mathbb{C}$-vector space. In particular, this means $L\\otimes\\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\\otimes\\mathbb{C}$ are in bijection with maximal ideals of $\\mathbb{C}[x_1,\\dots,x_n]$ containing $M$, so we're done.</p>\n" }, { "answer_id": 2083382, "score": 7, "text": "<p>We can actually describe the maximal ideals containing $M$. Let me outline a more \"geometric\" approach (which might be useful if you are interested in algebraic geometry):</p>\n\n<ul>\n<li>The quotient $\\mathbf Q[x_1, \\dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\\mathbf Q$ (by Zariski's lemma). Let $\\alpha_i$ be the image of $x_i$ under this identification.</li>\n<li>Similarly, for a maximal ideal $N = (x_1 - \\beta_1,\\dots,x_n - \\beta_n)$ of $\\mathbf C[x_1, \\dots, x_n]$, the quotient $\\mathbf C[x_1, \\dots, x_n]/N$ is identified with $\\mathbf C$ via the isomorphism $$b_N: \\mathbf C[x_1, \\dots, x_n]/N \\rightarrow \\mathbf C$$ given by $b_N(x_i) = \\beta_i$. 
</li>\n<li>An inclusion $M \\subset N$ induces an embedding $$\\varphi: \\mathbf Q[x_1, \\dots, x_n]/M \\rightarrow \\mathbf C[x_1, \\dots, x_n]/N$$ of $L$ into $ \\mathbf C.$ </li>\n<li>Conversely, for an embedding $$\\sigma: L \\rightarrow \\mathbf C,$$ the maximal ideal $$(x_1 - \\sigma(\\alpha_1), \\dots, x_n - \\sigma(\\alpha_n) )$$ of $ \\mathbf C[x_1, \\dots, x_n]$ contains $M$.</li>\n</ul>\n\n<p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\\mathbf C[x_1, \\dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p>\n\n<hr>\n\n<p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\\mathbf A^n_\\mathbf C \\rightarrow \\mathbf A^n_\\mathbf Q $$ sends a closed point $p$ of $\\mathbf A^n_\\mathbf C$ to its Galois orbit $G_\\mathbf Q p$ regarded as a closed point of $\\mathbf A^n_\\mathbf Q$. The fiber of $G_\\mathbf Q p$ consists the points $\\sigma(p)$, for $\\sigma \\in G_\\mathbf Q = \\operatorname{Gal} (\\overline{\\mathbf Q}/\\mathbf Q)$.</p>\n" }, { "answer_id": 2082953, "score": 3, "text": "<p>If I haven't made a mistake, the result is true if $\\mathbb{Q} \\subseteq \\mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p>\n\n<p>Let $A = \\mathbb{Q}[X_1, ... , X_n], B = \\mathbb{C}[X_1, ... , X_n]$. Then $B$ is a flat extension of $A$, and so the going-down property holds.</p>\n\n<p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \\subset \\mathfrak p_1 \\subset \\cdots \\subset \\mathfrak p_{n-1} \\subset M$. </p>\n\n<p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \\cap A \\supseteq MB \\cap A \\supseteq M$, which implies $P \\cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \\supset \\cdots \\supset P_1$, contained in $P$, such that $P_i \\cap A = \\mathfrak p_i$.</p>\n\n<p>Now we have a chain of $n$ inclusions of prime ideals $0 \\subset P_1 \\subset \\cdots \\subset P_{n-1} \\subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p>\n\n<p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p>\n" }, { "answer_id": 2084277, "score": 3, "text": "<p>Here's another approach. By <a href=\"https://en.wikipedia.org/wiki/Zariski%27s_lemma\" rel=\"nofollow noreferrer\">Zariski's lemma</a>, the quotient $L=\\mathbb{Q}[x_1,\\dots,x_n]/M$ is a finite extension of $\\mathbb{Q}$. Now tensoring the exact sequence $$M\\to\\mathbb{Q}[x_1,\\dots,x_n]\\to L\\to 0$$ with $\\mathbb{C}$ over $\\mathbb{Q}$ gives an exact sequence $$M\\otimes\\mathbb{C}\\to \\mathbb{C}[x_1,\\dots,x_n]\\to L\\otimes\\mathbb{C}\\to 0.$$ The image of the first map is just the ideal in $\\mathbb{C}[x_1,\\dots,x_n]$ generated by $M$, so this says that $L\\otimes\\mathbb{C}$ is the quotient of $\\mathbb{C}[x_1,\\dots,x_n]$ by the ideal generated by $M$.</p>\n\n<p>Now $L$ is finite-dimensional as a $\\mathbb{Q}$-vector space, and hence $L\\otimes\\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\\mathbb{C}$-vector space. 
In particular, this means $L\\otimes\\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\\otimes\\mathbb{C}$ are in bijection with maximal ideals of $\\mathbb{C}[x_1,\\dots,x_n]$ containing $M$, so we're done.</p>\n" } ]
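<p>As a concrete check on these answers, one can work the tensor-product argument through the $n=1$ example $M=(x^2+1)$ from the question (an illustration only): here $L=\mathbb{Q}[x]/(x^2+1)\cong\mathbb{Q}(i)$, and by the Chinese remainder theorem $$L\otimes_{\mathbb{Q}}\mathbb{C}\cong\mathbb{C}[x]/(x^2+1)\cong\mathbb{C}[x]/(x-i)\times\mathbb{C}[x]/(x+i)\cong\mathbb{C}\times\mathbb{C},$$ a two-dimensional artinian $\mathbb{C}$-algebra with exactly two maximal ideals. These correspond to the two embeddings $\mathbb{Q}(i)\hookrightarrow\mathbb{C}$ from the geometric answer and to the two maximal ideals $(x-i)$ and $(x+i)$ of $\mathbb{C}[x]$ containing $M$.</p>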
Question: Maximal ideal of $\mathbb{Q}[x_1,...,x_n]$ is contained in only finitely maximal ideals of $\mathbb{C}[x_1,...,x_n]$ Answer 1: <p>We can actually describe the maximal ideals containing $M$. Let me outline a more "geometric" approach (which might be useful if you are interested in algebraic geometry):</p> <ul> <li>The quotient $\mathbf Q[x_1, \dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\mathbf Q$ (by Zariski's lemma). Let $\alpha_i$ be the image of $x_i$ under this identification.</li> <li>Similarly, for a maximal ideal $N = (x_1 - \beta_1,\dots,x_n - \beta_n)$ of $\mathbf C[x_1, \dots, x_n]$, the quotient $\mathbf C[x_1, \dots, x_n]/N$ is identified with $\mathbf C$ via the isomorphism $$b_N: \mathbf C[x_1, \dots, x_n]/N \rightarrow \mathbf C$$ given by $b_N(x_i) = \beta_i$. </li> <li>An inclusion $M \subset N$ induces an embedding $$\varphi: \mathbf Q[x_1, \dots, x_n]/M \rightarrow \mathbf C[x_1, \dots, x_n]/N$$ of $L$ into $ \mathbf C.$ </li> <li>Conversely, for an embedding $$\sigma: L \rightarrow \mathbf C,$$ the maximal ideal $$(x_1 - \sigma(\alpha_1), \dots, x_n - \sigma(\alpha_n) )$$ of $ \mathbf C[x_1, \dots, x_n]$ contains $M$.</li> </ul> <p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\mathbf C[x_1, \dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p> <hr> <p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\mathbf A^n_\mathbf C \rightarrow \mathbf A^n_\mathbf Q $$ sends a closed point $p$ of $\mathbf A^n_\mathbf C$ to its Galois orbit $G_\mathbf Q p$ regarded as a closed point of $\mathbf A^n_\mathbf Q$. The fiber of $G_\mathbf Q p$ consists the points $\sigma(p)$, for $\sigma \in G_\mathbf Q = \operatorname{Gal} (\overline{\mathbf Q}/\mathbf Q)$.</p> Answer 2: <p>If I haven't made a mistake, the result is true if $\mathbb{Q} \subseteq \mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p> <p>Let $A = \mathbb{Q}[X_1, ... , X_n], B = \mathbb{C}[X_1, ... , X_n]$. Then $B$ is a flat extension of $A$, and so the going-down property holds.</p> <p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \subset \mathfrak p_1 \subset \cdots \subset \mathfrak p_{n-1} \subset M$. </p> <p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \cap A \supseteq MB \cap A \supseteq M$, which implies $P \cap A = M$, because $M$ is a maximal ideal of $A$. By the going down property, there exist prime ideals $P_{n-1} \supset \cdots \supset P_1$, contained in $P$, such that $P_i \cap A = \mathfrak p_i$.</p> <p>Now we have a chain of $n$ inclusions of prime ideals $0 \subset P_1 \subset \cdots \subset P_{n-1} \subset P$. Since $B$ is also an $n$-dimensional ring, $P$ must be a maximal ideal.</p> <p>We have shown that every prime ideal of the quotient ring $B/MB$ is maximal. Hence $B/MB$ is a zero dimensional Noetherian ring. But this is the same as saying that $B/MB$ is artinian, and an artinian ring has only finitely many maximal ideals. So $MB$, and therefore $M$, is contained in only finitely many maximal ideals of $B$.</p> Answer 3: <p>Here's another approach. By <a href="https://en.wikipedia.org/wiki/Zariski%27s_lemma" rel="nofollow noreferrer">Zariski's lemma</a>, the quotient $L=\mathbb{Q}[x_1,\dots,x_n]/M$ is a finite extension of $\mathbb{Q}$. 
Now tensoring the exact sequence $$M\to\mathbb{Q}[x_1,\dots,x_n]\to L\to 0$$ with $\mathbb{C}$ over $\mathbb{Q}$ gives an exact sequence $$M\otimes\mathbb{C}\to \mathbb{C}[x_1,\dots,x_n]\to L\otimes\mathbb{C}\to 0.$$ The image of the first map is just the ideal in $\mathbb{C}[x_1,\dots,x_n]$ generated by $M$, so this says that $L\otimes\mathbb{C}$ is the quotient of $\mathbb{C}[x_1,\dots,x_n]$ by the ideal generated by $M$.</p> <p>Now $L$ is finite-dimensional as a $\mathbb{Q}$-vector space, and hence $L\otimes\mathbb{C}$ is finite-dimensional (of the same dimension) as a $\mathbb{C}$-vector space. In particular, this means $L\otimes\mathbb{C}$ is an artinian ring, and so it has only finitely many maximal ideals. But maximal ideals of $L\otimes\mathbb{C}$ are in bijection with maximal ideals of $\mathbb{C}[x_1,\dots,x_n]$ containing $M$, so we're done.</p> Answer 4: <p>We can actually describe the maximal ideals containing $M$. Let me outline a more "geometric" approach (which might be useful if you are interested in algebraic geometry):</p> <ul> <li>The quotient $\mathbf Q[x_1, \dots, x_n]/M$ is a field, hence identified with a finite extension $L$ of $\mathbf Q$ (by Zariski's lemma). Let $\alpha_i$ be the image of $x_i$ under this identification.</li> <li>Similarly, for a maximal ideal $N = (x_1 - \beta_1,\dots,x_n - \beta_n)$ of $\mathbf C[x_1, \dots, x_n]$, the quotient $\mathbf C[x_1, \dots, x_n]/N$ is identified with $\mathbf C$ via the isomorphism $$b_N: \mathbf C[x_1, \dots, x_n]/N \rightarrow \mathbf C$$ given by $b_N(x_i) = \beta_i$. </li> <li>An inclusion $M \subset N$ induces an embedding $$\varphi: \mathbf Q[x_1, \dots, x_n]/M \rightarrow \mathbf C[x_1, \dots, x_n]/N$$ of $L$ into $ \mathbf C.$ </li> <li>Conversely, for an embedding $$\sigma: L \rightarrow \mathbf C,$$ the maximal ideal $$(x_1 - \sigma(\alpha_1), \dots, x_n - \sigma(\alpha_n) )$$ of $ \mathbf C[x_1, \dots, x_n]$ contains $M$.</li> </ul> <p>Therefore, there exist exactly $[L : K]$ maximal ideals in $\mathbf C[x_1, \dots, x_n]$ containing $M$ and they are the ones described in the last bullet above.</p> <hr> <p>Geometrically, in the language of schemes, this is what's going on. The base change morphism $$\mathbf A^n_\mathbf C \rightarrow \mathbf A^n_\mathbf Q $$ sends a closed point $p$ of $\mathbf A^n_\mathbf C$ to its Galois orbit $G_\mathbf Q p$ regarded as a closed point of $\mathbf A^n_\mathbf Q$. The fiber of $G_\mathbf Q p$ consists the points $\sigma(p)$, for $\sigma \in G_\mathbf Q = \operatorname{Gal} (\overline{\mathbf Q}/\mathbf Q)$.</p> Answer 5: <p>If I haven't made a mistake, the result is true if $\mathbb{Q} \subseteq \mathbb{C}$ is replaced by any field extension, so you don't need the Nullstellensatz. </p> <p>Let $A = \mathbb{Q}[X_1, ... , X_n], B = \mathbb{C}[X_1, ... , X_n]$. Then $B$ is a flat extension of $A$, and so the going-down property holds.</p> <p>Since $M$ is a maximal ideal, and $A$ is an $n$-dimensional ring, we know by the dimension formula for finitely generated algebras over a field that there is a chain of $n$ inclusions of prime ideals $0 \subset \mathfrak p_1 \subset \cdots \subset \mathfrak p_{n-1} \subset M$. </p> <p>Let $P$ be a prime ideal of $B$ which contains $MB$. Then $P \cap A \supseteq MB \cap A \supseteq M$, which implies $P \cap A = M$, because $M$ is a maximal ideal of $A$. 
Proof that the binomial transform is an involution
2,082,924
6
https://math.stackexchange.com/questions/2082924/proof-that-the-binomial-transform-is-involution
<p>Let $\{x_n\}$, $n=0,1,2,\ldots$, be a sequence, and let $\{y_n\}$, $n=0,1,2,\ldots$, be its binomial transform, that is, $$ y_n=\sum_{k=0}^{n} (-1)^k {n\choose k} x_k. $$ I need to prove that the binomial transform is an involution, that is, $$ x_n=\sum_{k=0}^{n} (-1)^k {n\choose k} y_k. $$</p> <p>I tried to use the combinatorial Vandermonde's identity but I failed. Please help me prove this.</p>
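The claim is easy to sanity-check numerically before proving it. Here is a short Python sketch (the helper name `binomial_transform` is ours, purely for illustration) that applies the signed transform twice to a random integer sequence; if the involution property holds, the assertion passes on every run.

```python
# Numerical sanity check of the involution claim (illustrative, not a proof).
from math import comb
import random

def binomial_transform(xs):
    # y_n = sum_{k=0}^{n} (-1)^k * C(n, k) * x_k
    return [sum((-1) ** k * comb(n, k) * xs[k] for k in range(n + 1))
            for n in range(len(xs))]

xs = [random.randint(-9, 9) for _ in range(12)]
assert binomial_transform(binomial_transform(xs)) == xs  # transform twice = identity
```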
[ { "answer_id": 2082935, "score": 4, "text": "<p>If $g(t) = \\sum_{n=0}^\\infty x_n t^n$ is the generating function of $\\{x_n\\}$,\nthen the generating function of $\\{y_n\\}$ is\n$$ \\eqalign{h(t) &amp;= \\sum_{n=0}^\\infty \\sum_{k=0}^n (-1)^k {n \\choose k} x_k t^n\\cr\n&amp;= \\sum_{k=0}^\\infty (-1)^k x_k \\sum_{n=k}^\\infty {n \\choose k} t^n\\cr\n&amp;= \\sum_{k=0}^\\infty (-1)^k x_k \\frac{t^k}{(1-t)^{k+1}}\n&amp;= \\dfrac{1}{1-t} g\\left(-\\frac{t}{1-t}\\right)}$$\nIf $s = -t/(1-t)$, then $t = -s/(1-s)$, and this says\n$$ g(s) = (1-t) h(t) = \\dfrac{1}{1-s} h\\left(-\\frac{s}{1-s}\\right)$$\nThus the generating function of the binomial transform of $\\{y_n\\}$ is $g$ again.</p>\n" }, { "answer_id": 2083977, "score": 4, "text": "<p>Suppose we seek to prove that</p>\n\n<p>$$b_n = \\sum_{k=0}^n (-1)^k {n\\choose k}\n\\sum_{q=0}^k (-1)^q {k\\choose q} b_q.$$</p>\n\n<p>Reversing the two sums we obtain</p>\n\n<p>$$\\sum_{q=0}^n (-1)^q b_q \\sum_{k=q}^n\n(-1)^k {n\\choose k} {k\\choose q}.$$</p>\n\n<p>We have</p>\n\n<p>$${n\\choose k} {k\\choose q} =\n{n\\choose q} {n-q\\choose k-q}$$ </p>\n\n<p>and we obtain</p>\n\n<p>$$\\sum_{q=0}^n (-1)^q b_q \\sum_{k=q}^n\n(-1)^k {n\\choose q} {n-q\\choose k-q}\n\\\\ = \\sum_{q=0}^n (-1)^q {n\\choose q} b_q \n\\sum_{k=q}^n (-1)^k {n-q\\choose k-q}\n\\\\ = \\sum_{q=0}^n {n\\choose q} b_q \n\\sum_{k=0}^{n-q} (-1)^k {n-q\\choose k}.$$</p>\n\n<p>Now the inner sum is $(-1)^0 {0\\choose 0} = 1$ when $q=n$\nand $(1-1)^{n-q} = 0$ otherwise. This leaves</p>\n\n<p>$${n\\choose n} b_n = b_n.$$</p>\n" }, { "answer_id": 2084434, "score": 3, "text": "<p>It is also convenient to show the involution property with the help of <em><a href=\"https://en.wikipedia.org/wiki/Generating_function#Exponential_generating_function\" rel=\"nofollow noreferrer\">exponential generating functions</a></em>.</p>\n\n<blockquote>\n <p>Let\n \\begin{align*}\nA(t)=\\sum_{n=0}^\\infty x_{n}\\frac{t^n}{n!} \\qquad\\text{and}\\qquad B(t)=\\sum_{n=0}^\\infty y_{n}\\frac{t^n}{n!}\n\\end{align*}\n the exponential generating functions of $\\{x_n\\}$ and $\\{y_n\\}$.</p>\n</blockquote>\n\n<p>Since\n\\begin{align*}\nA(-t)e^t&amp;=\\left(\\sum_{k=0}^\\infty x_{k}\\frac{(-t)^k}{k!}\\right)\\left(\\sum_{l=0}^\\infty \\frac{t^l}{l!}\\right)\\\\\n&amp;=\\sum_{n=0}^\\infty\\left(\\sum_{{k+l=n}\\atop{k,l\\geq 0}}\\frac{(-1)^kx_k}{k!}\\cdot\\frac{1}{l!}\\right)t^n\\\\\n&amp;=\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^n\\binom{n}{k}(-1)^kx_k\\right)\\frac{t^n}{n!}\\\\\n&amp;=\\sum_{n=0}^\\infty y_n\\frac{t^n}{n!}\\\\\n&amp;=B(t)\n\\end{align*}</p>\n\n<blockquote>\n <p>we obtain the following correspondence\n \\begin{align*}\nB(t)=A(-t)e^t\\qquad\\longleftrightarrow\\qquad y_n=\\sum_{k=0}^n\\binom{n}{k}(-1)^k x_{k}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>Multiplication with $e^{-t}$ and substitution $t$ with $-t$ gives\n\\begin{align*}\nA(t)=B(-t)e^{t}\n\\end{align*}</p>\n\n<blockquote>\n <p>According to (1) we conclude due to symmetry</p>\n \n <p>\\begin{align*}\nA(t)=B(-t)e^t\\qquad\\longleftrightarrow\\qquad x_n=\\sum_{k=0}^n \\binom{n}{k}(-1)^ky_{k}\n\\end{align*}\n and the claim follows since sequence and its binomial transform have the same generating function.</p>\n</blockquote>\n" }, { "answer_id": 2082937, "score": 2, "text": "<p>Let $L$ be the linear transformation from the space of polynomials in $t$ into the space of scalars that is determined by\n$$\nL(t^n) = x_n \\text{ for } n = 0,1,2,\\ldots.\n$$\nThen\n$$\ny_n = \\sum_{k=0}^n (-1)^k \\binom n k x_k = \\sum_{k=0}^n (-1)^k \\binom n k 
L(t^k) = L\\left( \\sum_{k=0}^n \\binom n k (-t)^k \\right) = L((1-t)^n).\n$$\nAnd\n\\begin{align}\nx_n &amp; = L(t^n) = L((1-(1-t))^n) = L \\left( \\sum_{k=0}^n (-1)^k \\binom n k (1-t)^k \\right) \\\\[10pt]\n&amp; = \\sum_{k=0}^n (-1)^k \\binom n k L((1-t)^k) = \\sum_{k=0}^n (-1)^k \\binom n k y_k.\n\\end{align}\n(As far as I know, this argument is due to Gian-Carlo Rota.)</p>\n" }, { "answer_id": 4481356, "score": 0, "text": "<p>Consider the linear transformation of polynomials\n<span class=\"math-container\">$P(t) \\mapsto P(1-t)$</span>, clearly an involution\n<span class=\"math-container\">$$a_0 + a_1 X + a_2 X^2 + \\cdots \\mapsto a_0 + a_1(1-X) + a_2 (1-X)^2 + \\cdots $$</span>\nso the map in terms of the coefficients is\n<span class=\"math-container\">$(a_i) \\mapsto (b_i)$</span>\nwhere\n<span class=\"math-container\">$$b_i = \\sum_{k\\ge i}(-1)^k \\binom{k}{i} a_k$$</span>\nWe see that we binomial transform is the dual (transpose) of this involution, so again an involution.</p>\n" }, { "answer_id": 2082935, "score": 4, "text": "<p>If $g(t) = \\sum_{n=0}^\\infty x_n t^n$ is the generating function of $\\{x_n\\}$,\nthen the generating function of $\\{y_n\\}$ is\n$$ \\eqalign{h(t) &amp;= \\sum_{n=0}^\\infty \\sum_{k=0}^n (-1)^k {n \\choose k} x_k t^n\\cr\n&amp;= \\sum_{k=0}^\\infty (-1)^k x_k \\sum_{n=k}^\\infty {n \\choose k} t^n\\cr\n&amp;= \\sum_{k=0}^\\infty (-1)^k x_k \\frac{t^k}{(1-t)^{k+1}}\n&amp;= \\dfrac{1}{1-t} g\\left(-\\frac{t}{1-t}\\right)}$$\nIf $s = -t/(1-t)$, then $t = -s/(1-s)$, and this says\n$$ g(s) = (1-t) h(t) = \\dfrac{1}{1-s} h\\left(-\\frac{s}{1-s}\\right)$$\nThus the generating function of the binomial transform of $\\{y_n\\}$ is $g$ again.</p>\n" }, { "answer_id": 2083977, "score": 4, "text": "<p>Suppose we seek to prove that</p>\n\n<p>$$b_n = \\sum_{k=0}^n (-1)^k {n\\choose k}\n\\sum_{q=0}^k (-1)^q {k\\choose q} b_q.$$</p>\n\n<p>Reversing the two sums we obtain</p>\n\n<p>$$\\sum_{q=0}^n (-1)^q b_q \\sum_{k=q}^n\n(-1)^k {n\\choose k} {k\\choose q}.$$</p>\n\n<p>We have</p>\n\n<p>$${n\\choose k} {k\\choose q} =\n{n\\choose q} {n-q\\choose k-q}$$ </p>\n\n<p>and we obtain</p>\n\n<p>$$\\sum_{q=0}^n (-1)^q b_q \\sum_{k=q}^n\n(-1)^k {n\\choose q} {n-q\\choose k-q}\n\\\\ = \\sum_{q=0}^n (-1)^q {n\\choose q} b_q \n\\sum_{k=q}^n (-1)^k {n-q\\choose k-q}\n\\\\ = \\sum_{q=0}^n {n\\choose q} b_q \n\\sum_{k=0}^{n-q} (-1)^k {n-q\\choose k}.$$</p>\n\n<p>Now the inner sum is $(-1)^0 {0\\choose 0} = 1$ when $q=n$\nand $(1-1)^{n-q} = 0$ otherwise. 
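The cancellation at the heart of the double-sum answer is the identity $\sum_{k=q}^{n}(-1)^{k-q}\binom{n}{k}\binom{k}{q}=\delta_{qn}$. Here is a brute-force spot check of that identity (our own restatement of the step, checked only over small ranges):

```python
# Spot check of the orthogonality identity used in the double-sum proof.
from math import comb

for n in range(9):
    for q in range(n + 1):
        s = sum((-1) ** (k - q) * comb(n, k) * comb(k, q) for k in range(q, n + 1))
        assert s == (1 if q == n else 0)  # Kronecker delta
print("identity verified for n < 9")
```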
This leaves</p>\n\n<p>$${n\\choose n} b_n = b_n.$$</p>\n" }, { "answer_id": 2084434, "score": 3, "text": "<p>It is also convenient to show the involution property with the help of <em><a href=\"https://en.wikipedia.org/wiki/Generating_function#Exponential_generating_function\" rel=\"nofollow noreferrer\">exponential generating functions</a></em>.</p>\n\n<blockquote>\n <p>Let\n \\begin{align*}\nA(t)=\\sum_{n=0}^\\infty x_{n}\\frac{t^n}{n!} \\qquad\\text{and}\\qquad B(t)=\\sum_{n=0}^\\infty y_{n}\\frac{t^n}{n!}\n\\end{align*}\n the exponential generating functions of $\\{x_n\\}$ and $\\{y_n\\}$.</p>\n</blockquote>\n\n<p>Since\n\\begin{align*}\nA(-t)e^t&amp;=\\left(\\sum_{k=0}^\\infty x_{k}\\frac{(-t)^k}{k!}\\right)\\left(\\sum_{l=0}^\\infty \\frac{t^l}{l!}\\right)\\\\\n&amp;=\\sum_{n=0}^\\infty\\left(\\sum_{{k+l=n}\\atop{k,l\\geq 0}}\\frac{(-1)^kx_k}{k!}\\cdot\\frac{1}{l!}\\right)t^n\\\\\n&amp;=\\sum_{n=0}^\\infty\\left(\\sum_{k=0}^n\\binom{n}{k}(-1)^kx_k\\right)\\frac{t^n}{n!}\\\\\n&amp;=\\sum_{n=0}^\\infty y_n\\frac{t^n}{n!}\\\\\n&amp;=B(t)\n\\end{align*}</p>\n\n<blockquote>\n <p>we obtain the following correspondence\n \\begin{align*}\nB(t)=A(-t)e^t\\qquad\\longleftrightarrow\\qquad y_n=\\sum_{k=0}^n\\binom{n}{k}(-1)^k x_{k}\\tag{1}\n\\end{align*}</p>\n</blockquote>\n\n<p>Multiplication with $e^{-t}$ and substitution $t$ with $-t$ gives\n\\begin{align*}\nA(t)=B(-t)e^{t}\n\\end{align*}</p>\n\n<blockquote>\n <p>According to (1) we conclude due to symmetry</p>\n \n <p>\\begin{align*}\nA(t)=B(-t)e^t\\qquad\\longleftrightarrow\\qquad x_n=\\sum_{k=0}^n \\binom{n}{k}(-1)^ky_{k}\n\\end{align*}\n and the claim follows since sequence and its binomial transform have the same generating function.</p>\n</blockquote>\n" }, { "answer_id": 2082937, "score": 2, "text": "<p>Let $L$ be the linear transformation from the space of polynomials in $t$ into the space of scalars that is determined by\n$$\nL(t^n) = x_n \\text{ for } n = 0,1,2,\\ldots.\n$$\nThen\n$$\ny_n = \\sum_{k=0}^n (-1)^k \\binom n k x_k = \\sum_{k=0}^n (-1)^k \\binom n k L(t^k) = L\\left( \\sum_{k=0}^n \\binom n k (-t)^k \\right) = L((1-t)^n).\n$$\nAnd\n\\begin{align}\nx_n &amp; = L(t^n) = L((1-(1-t))^n) = L \\left( \\sum_{k=0}^n (-1)^k \\binom n k (1-t)^k \\right) \\\\[10pt]\n&amp; = \\sum_{k=0}^n (-1)^k \\binom n k L((1-t)^k) = \\sum_{k=0}^n (-1)^k \\binom n k y_k.\n\\end{align}\n(As far as I know, this argument is due to Gian-Carlo Rota.)</p>\n" }, { "answer_id": 4481356, "score": 0, "text": "<p>Consider the linear transformation of polynomials\n<span class=\"math-container\">$P(t) \\mapsto P(1-t)$</span>, clearly an involution\n<span class=\"math-container\">$$a_0 + a_1 X + a_2 X^2 + \\cdots \\mapsto a_0 + a_1(1-X) + a_2 (1-X)^2 + \\cdots $$</span>\nso the map in terms of the coefficients is\n<span class=\"math-container\">$(a_i) \\mapsto (b_i)$</span>\nwhere\n<span class=\"math-container\">$$b_i = \\sum_{k\\ge i}(-1)^k \\binom{k}{i} a_k$$</span>\nWe see that we binomial transform is the dual (transpose) of this involution, so again an involution.</p>\n" }, { "answer_id": 2082935, "score": 4, "text": "<p>If $g(t) = \\sum_{n=0}^\\infty x_n t^n$ is the generating function of $\\{x_n\\}$,\nthen the generating function of $\\{y_n\\}$ is\n$$ \\eqalign{h(t) &amp;= \\sum_{n=0}^\\infty \\sum_{k=0}^n (-1)^k {n \\choose k} x_k t^n\\cr\n&amp;= \\sum_{k=0}^\\infty (-1)^k x_k \\sum_{n=k}^\\infty {n \\choose k} t^n\\cr\n&amp;= \\sum_{k=0}^\\infty (-1)^k x_k \\frac{t^k}{(1-t)^{k+1}}\n&amp;= \\dfrac{1}{1-t} g\\left(-\\frac{t}{1-t}\\right)}$$\nIf $s = 
Question: Proof that the binomial transform is an involution
Answer 1: <p>If $g(t) = \sum_{n=0}^\infty x_n t^n$ is the generating function of $\{x_n\}$, then the generating function of $\{y_n\}$ is $$ \eqalign{h(t) &amp;= \sum_{n=0}^\infty \sum_{k=0}^n (-1)^k {n \choose k} x_k t^n\cr &amp;= \sum_{k=0}^\infty (-1)^k x_k \sum_{n=k}^\infty {n \choose k} t^n\cr &amp;= \sum_{k=0}^\infty (-1)^k x_k \frac{t^k}{(1-t)^{k+1}}\cr &amp;= \dfrac{1}{1-t} g\left(-\frac{t}{1-t}\right)}$$ If $s = -t/(1-t)$, then $t = -s/(1-s)$, and this says $$ g(s) = (1-t) h(t) = \dfrac{1}{1-s} h\left(-\frac{s}{1-s}\right)$$ Thus the generating function of the binomial transform of $\{y_n\}$ is $g$ again.</p>
Answer 2: <p>Suppose we seek to prove that</p> <p>$$b_n = \sum_{k=0}^n (-1)^k {n\choose k} \sum_{q=0}^k (-1)^q {k\choose q} b_q.$$</p> <p>Reversing the two sums we obtain</p> <p>$$\sum_{q=0}^n (-1)^q b_q \sum_{k=q}^n (-1)^k {n\choose k} {k\choose q}.$$</p> <p>We have</p> <p>$${n\choose k} {k\choose q} = {n\choose q} {n-q\choose k-q}$$</p> <p>and we obtain</p> <p>$$\sum_{q=0}^n (-1)^q b_q \sum_{k=q}^n (-1)^k {n\choose q} {n-q\choose k-q} \\ = \sum_{q=0}^n (-1)^q {n\choose q} b_q \sum_{k=q}^n (-1)^k {n-q\choose k-q} \\ = \sum_{q=0}^n {n\choose q} b_q \sum_{k=0}^{n-q} (-1)^k {n-q\choose k}.$$</p> <p>Now the inner sum is $(-1)^0 {0\choose 0} = 1$ when $q=n$ and $(1-1)^{n-q} = 0$ otherwise. This leaves</p> <p>$${n\choose n} b_n = b_n.$$</p>
Answer 3: <p>It is also convenient to show the involution property with the help of <em><a href="https://en.wikipedia.org/wiki/Generating_function#Exponential_generating_function" rel="nofollow noreferrer">exponential generating functions</a></em>.</p> <blockquote> <p>Let \begin{align*} A(t)=\sum_{n=0}^\infty x_{n}\frac{t^n}{n!} \qquad\text{and}\qquad B(t)=\sum_{n=0}^\infty y_{n}\frac{t^n}{n!} \end{align*} be the exponential generating functions of $\{x_n\}$ and $\{y_n\}$.</p> </blockquote> <p>Since \begin{align*} A(-t)e^t&amp;=\left(\sum_{k=0}^\infty x_{k}\frac{(-t)^k}{k!}\right)\left(\sum_{l=0}^\infty \frac{t^l}{l!}\right)\\ &amp;=\sum_{n=0}^\infty\left(\sum_{{k+l=n}\atop{k,l\geq 0}}\frac{(-1)^kx_k}{k!}\cdot\frac{1}{l!}\right)t^n\\ &amp;=\sum_{n=0}^\infty\left(\sum_{k=0}^n\binom{n}{k}(-1)^kx_k\right)\frac{t^n}{n!}\\ &amp;=\sum_{n=0}^\infty y_n\frac{t^n}{n!}\\ &amp;=B(t) \end{align*}</p> <blockquote> <p>we obtain the following correspondence \begin{align*} B(t)=A(-t)e^t\qquad\longleftrightarrow\qquad y_n=\sum_{k=0}^n\binom{n}{k}(-1)^k x_{k}\tag{1} \end{align*}</p> </blockquote> <p>Multiplying by $e^{-t}$ and substituting $-t$ for $t$ gives \begin{align*} A(t)=B(-t)e^{t} \end{align*}</p> <blockquote> <p>According to (1) we conclude by symmetry</p> <p>\begin{align*} A(t)=B(-t)e^t\qquad\longleftrightarrow\qquad x_n=\sum_{k=0}^n \binom{n}{k}(-1)^ky_{k} \end{align*} and the claim follows: a sequence and the binomial transform of its binomial transform have the same exponential generating function.</p> </blockquote>
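The two generating-function identities above invite a quick machine check. The sketch below is an editorial addition rather than part of any answer: it picks the concrete sequence $x_n = 2^n$ (so that $y_n = (1-2)^n = (-1)^n$) and verifies Answer 1's relation $h(t) = g(-t/(1-t))/(1-t)$ and Answer 3's relation $B(t) = A(-t)e^t$ as truncated power series with sympy.

```python
# Editorial sketch: series check of the OGF identity (Answer 1) and the
# EGF identity (Answer 3) for the test sequence x_n = 2^n, y_n = (-1)^n.
from sympy import symbols, exp, series, expand

t = symbols('t')
N = 8

# Answer 1: h(t) = g(-t/(1-t)) / (1-t) should be the OGF of {y_n}.
g = 1 / (1 - 2*t)                                  # OGF of x_n = 2^n
h = g.subs(t, -t/(1 - t)) / (1 - t)
ogf_y = sum((-1)**n * t**n for n in range(N))      # OGF of y_n = (-1)^n
assert expand(series(h, t, 0, N).removeO() - ogf_y) == 0

# Answer 3: B(t) = A(-t) e^t should be the EGF of {y_n}.
A = exp(2*t)                                       # EGF of x_n = 2^n
B = A.subs(t, -t) * exp(t)                         # reduces to exp(-t)
egf_y = series(exp(-t), t, 0, N).removeO()         # EGF of y_n = (-1)^n
assert expand(series(B, t, 0, N).removeO() - egf_y) == 0
```

Both asserts pass; of course this is a spot check for one sequence, not a proof.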
Answer 4: <p>Let $L$ be the linear transformation from the space of polynomials in $t$ into the space of scalars that is determined by $$ L(t^n) = x_n \text{ for } n = 0,1,2,\ldots. $$ Then $$ y_n = \sum_{k=0}^n (-1)^k \binom n k x_k = \sum_{k=0}^n (-1)^k \binom n k L(t^k) = L\left( \sum_{k=0}^n \binom n k (-t)^k \right) = L((1-t)^n). $$ And \begin{align} x_n &amp; = L(t^n) = L((1-(1-t))^n) = L \left( \sum_{k=0}^n (-1)^k \binom n k (1-t)^k \right) \\[10pt] &amp; = \sum_{k=0}^n (-1)^k \binom n k L((1-t)^k) = \sum_{k=0}^n (-1)^k \binom n k y_k. \end{align} (As far as I know, this argument is due to Gian-Carlo Rota.)</p>
Answer 5: <p>Consider the linear transformation of polynomials <span class="math-container">$P(t) \mapsto P(1-t)$</span>, clearly an involution: <span class="math-container">$$a_0 + a_1 X + a_2 X^2 + \cdots \mapsto a_0 + a_1(1-X) + a_2 (1-X)^2 + \cdots $$</span> In terms of the coefficients the map is <span class="math-container">$(a_i) \mapsto (b_i)$</span> where <span class="math-container">$$b_i = \sum_{k\ge i}(-1)^i \binom{k}{i} a_k$$</span> We see that the binomial transform is the dual (transpose) of this involution, so it is again an involution.</p>
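Answer 5's transpose argument can also be made concrete. In the sketch below (again an editorial addition, with the sign written as $(-1)^i$ as above), the binomial transform on length-$N$ sequences is the matrix $T_{nk} = (-1)^k\binom{n}{k}$ and the coefficient map of $P(t)\mapsto P(1-t)$ is $S_{ik} = (-1)^i\binom{k}{i}$; we check that $S$ is the transpose of $T$ and that both square to the identity.

```python
# Editorial sketch: the matrix of P(t) -> P(1-t) is the transpose of the
# binomial-transform matrix, and both are involutions (Answer 5).
from math import comb

N = 6
T = [[(-1)**k * comb(n, k) for k in range(N)] for n in range(N)]  # binomial transform
S = [[(-1)**i * comb(k, i) for k in range(N)] for i in range(N)]  # P(t) -> P(1-t)

def matmul(P, Q):
    return [[sum(P[i][j] * Q[j][k] for j in range(N)) for k in range(N)]
            for i in range(N)]

identity = [[int(i == k) for k in range(N)] for i in range(N)]
assert S == [list(row) for row in zip(*T)]   # S is T transposed
assert matmul(S, S) == identity              # P(t) -> P(1-t) is an involution
assert matmul(T, T) == identity              # hence so is the binomial transform
```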
This leaves</p> <p>$${n\choose n} b_n = b_n.$$</p> Answer 8: <p>It is also convenient to show the involution property with the help of <em><a href="https://en.wikipedia.org/wiki/Generating_function#Exponential_generating_function" rel="nofollow noreferrer">exponential generating functions</a></em>.</p> <blockquote> <p>Let \begin{align*} A(t)=\sum_{n=0}^\infty x_{n}\frac{t^n}{n!} \qquad\text{and}\qquad B(t)=\sum_{n=0}^\infty y_{n}\frac{t^n}{n!} \end{align*} the exponential generating functions of $\{x_n\}$ and $\{y_n\}$.</p> </blockquote> <p>Since \begin{align*} A(-t)e^t&amp;=\left(\sum_{k=0}^\infty x_{k}\frac{(-t)^k}{k!}\right)\left(\sum_{l=0}^\infty \frac{t^l}{l!}\right)\\ &amp;=\sum_{n=0}^\infty\left(\sum_{{k+l=n}\atop{k,l\geq 0}}\frac{(-1)^kx_k}{k!}\cdot\frac{1}{l!}\right)t^n\\ &amp;=\sum_{n=0}^\infty\left(\sum_{k=0}^n\binom{n}{k}(-1)^kx_k\right)\frac{t^n}{n!}\\ &amp;=\sum_{n=0}^\infty y_n\frac{t^n}{n!}\\ &amp;=B(t) \end{align*}</p> <blockquote> <p>we obtain the following correspondence \begin{align*} B(t)=A(-t)e^t\qquad\longleftrightarrow\qquad y_n=\sum_{k=0}^n\binom{n}{k}(-1)^k x_{k}\tag{1} \end{align*}</p> </blockquote> <p>Multiplication with $e^{-t}$ and substitution $t$ with $-t$ gives \begin{align*} A(t)=B(-t)e^{t} \end{align*}</p> <blockquote> <p>According to (1) we conclude due to symmetry</p> <p>\begin{align*} A(t)=B(-t)e^t\qquad\longleftrightarrow\qquad x_n=\sum_{k=0}^n \binom{n}{k}(-1)^ky_{k} \end{align*} and the claim follows since sequence and its binomial transform have the same generating function.</p> </blockquote> Answer 9: <p>Let $L$ be the linear transformation from the space of polynomials in $t$ into the space of scalars that is determined by $$ L(t^n) = x_n \text{ for } n = 0,1,2,\ldots. $$ Then $$ y_n = \sum_{k=0}^n (-1)^k \binom n k x_k = \sum_{k=0}^n (-1)^k \binom n k L(t^k) = L\left( \sum_{k=0}^n \binom n k (-t)^k \right) = L((1-t)^n). $$ And \begin{align} x_n &amp; = L(t^n) = L((1-(1-t))^n) = L \left( \sum_{k=0}^n (-1)^k \binom n k (1-t)^k \right) \\[10pt] &amp; = \sum_{k=0}^n (-1)^k \binom n k L((1-t)^k) = \sum_{k=0}^n (-1)^k \binom n k y_k. 
\end{align} (As far as I know, this argument is due to Gian-Carlo Rota.)</p> Answer 10: <p>Consider the linear transformation of polynomials <span class="math-container">$P(t) \mapsto P(1-t)$</span>, clearly an involution <span class="math-container">$$a_0 + a_1 X + a_2 X^2 + \cdots \mapsto a_0 + a_1(1-X) + a_2 (1-X)^2 + \cdots $$</span> so the map in terms of the coefficients is <span class="math-container">$(a_i) \mapsto (b_i)$</span> where <span class="math-container">$$b_i = \sum_{k\ge i}(-1)^k \binom{k}{i} a_k$$</span> We see that we binomial transform is the dual (transpose) of this involution, so again an involution.</p> Answer 11: <p>If $g(t) = \sum_{n=0}^\infty x_n t^n$ is the generating function of $\{x_n\}$, then the generating function of $\{y_n\}$ is $$ \eqalign{h(t) &amp;= \sum_{n=0}^\infty \sum_{k=0}^n (-1)^k {n \choose k} x_k t^n\cr &amp;= \sum_{k=0}^\infty (-1)^k x_k \sum_{n=k}^\infty {n \choose k} t^n\cr &amp;= \sum_{k=0}^\infty (-1)^k x_k \frac{t^k}{(1-t)^{k+1}} &amp;= \dfrac{1}{1-t} g\left(-\frac{t}{1-t}\right)}$$ If $s = -t/(1-t)$, then $t = -s/(1-s)$, and this says $$ g(s) = (1-t) h(t) = \dfrac{1}{1-s} h\left(-\frac{s}{1-s}\right)$$ Thus the generating function of the binomial transform of $\{y_n\}$ is $g$ again.</p> Answer 12: <p>Suppose we seek to prove that</p> <p>$$b_n = \sum_{k=0}^n (-1)^k {n\choose k} \sum_{q=0}^k (-1)^q {k\choose q} b_q.$$</p> <p>Reversing the two sums we obtain</p> <p>$$\sum_{q=0}^n (-1)^q b_q \sum_{k=q}^n (-1)^k {n\choose k} {k\choose q}.$$</p> <p>We have</p> <p>$${n\choose k} {k\choose q} = {n\choose q} {n-q\choose k-q}$$ </p> <p>and we obtain</p> <p>$$\sum_{q=0}^n (-1)^q b_q \sum_{k=q}^n (-1)^k {n\choose q} {n-q\choose k-q} \\ = \sum_{q=0}^n (-1)^q {n\choose q} b_q \sum_{k=q}^n (-1)^k {n-q\choose k-q} \\ = \sum_{q=0}^n {n\choose q} b_q \sum_{k=0}^{n-q} (-1)^k {n-q\choose k}.$$</p> <p>Now the inner sum is $(-1)^0 {0\choose 0} = 1$ when $q=n$ and $(1-1)^{n-q} = 0$ otherwise. 
This leaves</p> <p>$${n\choose n} b_n = b_n.$$</p> Answer 13: <p>It is also convenient to show the involution property with the help of <em><a href="https://en.wikipedia.org/wiki/Generating_function#Exponential_generating_function" rel="nofollow noreferrer">exponential generating functions</a></em>.</p> <blockquote> <p>Let \begin{align*} A(t)=\sum_{n=0}^\infty x_{n}\frac{t^n}{n!} \qquad\text{and}\qquad B(t)=\sum_{n=0}^\infty y_{n}\frac{t^n}{n!} \end{align*} the exponential generating functions of $\{x_n\}$ and $\{y_n\}$.</p> </blockquote> <p>Since \begin{align*} A(-t)e^t&amp;=\left(\sum_{k=0}^\infty x_{k}\frac{(-t)^k}{k!}\right)\left(\sum_{l=0}^\infty \frac{t^l}{l!}\right)\\ &amp;=\sum_{n=0}^\infty\left(\sum_{{k+l=n}\atop{k,l\geq 0}}\frac{(-1)^kx_k}{k!}\cdot\frac{1}{l!}\right)t^n\\ &amp;=\sum_{n=0}^\infty\left(\sum_{k=0}^n\binom{n}{k}(-1)^kx_k\right)\frac{t^n}{n!}\\ &amp;=\sum_{n=0}^\infty y_n\frac{t^n}{n!}\\ &amp;=B(t) \end{align*}</p> <blockquote> <p>we obtain the following correspondence \begin{align*} B(t)=A(-t)e^t\qquad\longleftrightarrow\qquad y_n=\sum_{k=0}^n\binom{n}{k}(-1)^k x_{k}\tag{1} \end{align*}</p> </blockquote> <p>Multiplication with $e^{-t}$ and substitution $t$ with $-t$ gives \begin{align*} A(t)=B(-t)e^{t} \end{align*}</p> <blockquote> <p>According to (1) we conclude due to symmetry</p> <p>\begin{align*} A(t)=B(-t)e^t\qquad\longleftrightarrow\qquad x_n=\sum_{k=0}^n \binom{n}{k}(-1)^ky_{k} \end{align*} and the claim follows since sequence and its binomial transform have the same generating function.</p> </blockquote> Answer 14: <p>Let $L$ be the linear transformation from the space of polynomials in $t$ into the space of scalars that is determined by $$ L(t^n) = x_n \text{ for } n = 0,1,2,\ldots. $$ Then $$ y_n = \sum_{k=0}^n (-1)^k \binom n k x_k = \sum_{k=0}^n (-1)^k \binom n k L(t^k) = L\left( \sum_{k=0}^n \binom n k (-t)^k \right) = L((1-t)^n). $$ And \begin{align} x_n &amp; = L(t^n) = L((1-(1-t))^n) = L \left( \sum_{k=0}^n (-1)^k \binom n k (1-t)^k \right) \\[10pt] &amp; = \sum_{k=0}^n (-1)^k \binom n k L((1-t)^k) = \sum_{k=0}^n (-1)^k \binom n k y_k. 
\end{align} (As far as I know, this argument is due to Gian-Carlo Rota.)</p> Answer 15: <p>Consider the linear transformation of polynomials <span class="math-container">$P(t) \mapsto P(1-t)$</span>, clearly an involution <span class="math-container">$$a_0 + a_1 X + a_2 X^2 + \cdots \mapsto a_0 + a_1(1-X) + a_2 (1-X)^2 + \cdots $$</span> so the map in terms of the coefficients is <span class="math-container">$(a_i) \mapsto (b_i)$</span> where <span class="math-container">$$b_i = \sum_{k\ge i}(-1)^k \binom{k}{i} a_k$$</span> We see that we binomial transform is the dual (transpose) of this involution, so again an involution.</p> Answer 16: <p>If $g(t) = \sum_{n=0}^\infty x_n t^n$ is the generating function of $\{x_n\}$, then the generating function of $\{y_n\}$ is $$ \eqalign{h(t) &amp;= \sum_{n=0}^\infty \sum_{k=0}^n (-1)^k {n \choose k} x_k t^n\cr &amp;= \sum_{k=0}^\infty (-1)^k x_k \sum_{n=k}^\infty {n \choose k} t^n\cr &amp;= \sum_{k=0}^\infty (-1)^k x_k \frac{t^k}{(1-t)^{k+1}} &amp;= \dfrac{1}{1-t} g\left(-\frac{t}{1-t}\right)}$$ If $s = -t/(1-t)$, then $t = -s/(1-s)$, and this says $$ g(s) = (1-t) h(t) = \dfrac{1}{1-s} h\left(-\frac{s}{1-s}\right)$$ Thus the generating function of the binomial transform of $\{y_n\}$ is $g$ again.</p> Answer 17: <p>Suppose we seek to prove that</p> <p>$$b_n = \sum_{k=0}^n (-1)^k {n\choose k} \sum_{q=0}^k (-1)^q {k\choose q} b_q.$$</p> <p>Reversing the two sums we obtain</p> <p>$$\sum_{q=0}^n (-1)^q b_q \sum_{k=q}^n (-1)^k {n\choose k} {k\choose q}.$$</p> <p>We have</p> <p>$${n\choose k} {k\choose q} = {n\choose q} {n-q\choose k-q}$$ </p> <p>and we obtain</p> <p>$$\sum_{q=0}^n (-1)^q b_q \sum_{k=q}^n (-1)^k {n\choose q} {n-q\choose k-q} \\ = \sum_{q=0}^n (-1)^q {n\choose q} b_q \sum_{k=q}^n (-1)^k {n-q\choose k-q} \\ = \sum_{q=0}^n {n\choose q} b_q \sum_{k=0}^{n-q} (-1)^k {n-q\choose k}.$$</p> <p>Now the inner sum is $(-1)^0 {0\choose 0} = 1$ when $q=n$ and $(1-1)^{n-q} = 0$ otherwise. 
This leaves</p> <p>$${n\choose n} b_n = b_n.$$</p> Answer 18: <p>It is also convenient to show the involution property with the help of <em><a href="https://en.wikipedia.org/wiki/Generating_function#Exponential_generating_function" rel="nofollow noreferrer">exponential generating functions</a></em>.</p> <blockquote> <p>Let \begin{align*} A(t)=\sum_{n=0}^\infty x_{n}\frac{t^n}{n!} \qquad\text{and}\qquad B(t)=\sum_{n=0}^\infty y_{n}\frac{t^n}{n!} \end{align*} the exponential generating functions of $\{x_n\}$ and $\{y_n\}$.</p> </blockquote> <p>Since \begin{align*} A(-t)e^t&amp;=\left(\sum_{k=0}^\infty x_{k}\frac{(-t)^k}{k!}\right)\left(\sum_{l=0}^\infty \frac{t^l}{l!}\right)\\ &amp;=\sum_{n=0}^\infty\left(\sum_{{k+l=n}\atop{k,l\geq 0}}\frac{(-1)^kx_k}{k!}\cdot\frac{1}{l!}\right)t^n\\ &amp;=\sum_{n=0}^\infty\left(\sum_{k=0}^n\binom{n}{k}(-1)^kx_k\right)\frac{t^n}{n!}\\ &amp;=\sum_{n=0}^\infty y_n\frac{t^n}{n!}\\ &amp;=B(t) \end{align*}</p> <blockquote> <p>we obtain the following correspondence \begin{align*} B(t)=A(-t)e^t\qquad\longleftrightarrow\qquad y_n=\sum_{k=0}^n\binom{n}{k}(-1)^k x_{k}\tag{1} \end{align*}</p> </blockquote> <p>Multiplication with $e^{-t}$ and substitution $t$ with $-t$ gives \begin{align*} A(t)=B(-t)e^{t} \end{align*}</p> <blockquote> <p>According to (1) we conclude due to symmetry</p> <p>\begin{align*} A(t)=B(-t)e^t\qquad\longleftrightarrow\qquad x_n=\sum_{k=0}^n \binom{n}{k}(-1)^ky_{k} \end{align*} and the claim follows since sequence and its binomial transform have the same generating function.</p> </blockquote> Answer 19: <p>Let $L$ be the linear transformation from the space of polynomials in $t$ into the space of scalars that is determined by $$ L(t^n) = x_n \text{ for } n = 0,1,2,\ldots. $$ Then $$ y_n = \sum_{k=0}^n (-1)^k \binom n k x_k = \sum_{k=0}^n (-1)^k \binom n k L(t^k) = L\left( \sum_{k=0}^n \binom n k (-t)^k \right) = L((1-t)^n). $$ And \begin{align} x_n &amp; = L(t^n) = L((1-(1-t))^n) = L \left( \sum_{k=0}^n (-1)^k \binom n k (1-t)^k \right) \\[10pt] &amp; = \sum_{k=0}^n (-1)^k \binom n k L((1-t)^k) = \sum_{k=0}^n (-1)^k \binom n k y_k. 
\end{align} (As far as I know, this argument is due to Gian-Carlo Rota.)</p>
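(A quick concrete check of the involution, added for illustration; it is not part of any of the answers above.) Take $x_k = 2^k$. The binomial theorem gives $$y_n=\sum_{k=0}^n(-1)^k\binom{n}{k}2^k=(1-2)^n=(-1)^n,$$ and transforming once more, $$\sum_{k=0}^n(-1)^k\binom{n}{k}y_k=\sum_{k=0}^n\binom{n}{k}=2^n=x_n,$$ exactly as the involution property predicts.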
This leaves</p> <p>$${n\choose n} b_n = b_n.$$</p> Answer 58: <p>It is also convenient to show the involution property with the help of <em><a href="https://en.wikipedia.org/wiki/Generating_function#Exponential_generating_function" rel="nofollow noreferrer">exponential generating functions</a></em>.</p> <blockquote> <p>Let \begin{align*} A(t)=\sum_{n=0}^\infty x_{n}\frac{t^n}{n!} \qquad\text{and}\qquad B(t)=\sum_{n=0}^\infty y_{n}\frac{t^n}{n!} \end{align*} the exponential generating functions of $\{x_n\}$ and $\{y_n\}$.</p> </blockquote> <p>Since \begin{align*} A(-t)e^t&amp;=\left(\sum_{k=0}^\infty x_{k}\frac{(-t)^k}{k!}\right)\left(\sum_{l=0}^\infty \frac{t^l}{l!}\right)\\ &amp;=\sum_{n=0}^\infty\left(\sum_{{k+l=n}\atop{k,l\geq 0}}\frac{(-1)^kx_k}{k!}\cdot\frac{1}{l!}\right)t^n\\ &amp;=\sum_{n=0}^\infty\left(\sum_{k=0}^n\binom{n}{k}(-1)^kx_k\right)\frac{t^n}{n!}\\ &amp;=\sum_{n=0}^\infty y_n\frac{t^n}{n!}\\ &amp;=B(t) \end{align*}</p> <blockquote> <p>we obtain the following correspondence \begin{align*} B(t)=A(-t)e^t\qquad\longleftrightarrow\qquad y_n=\sum_{k=0}^n\binom{n}{k}(-1)^k x_{k}\tag{1} \end{align*}</p> </blockquote> <p>Multiplication with $e^{-t}$ and substitution $t$ with $-t$ gives \begin{align*} A(t)=B(-t)e^{t} \end{align*}</p> <blockquote> <p>According to (1) we conclude due to symmetry</p> <p>\begin{align*} A(t)=B(-t)e^t\qquad\longleftrightarrow\qquad x_n=\sum_{k=0}^n \binom{n}{k}(-1)^ky_{k} \end{align*} and the claim follows since sequence and its binomial transform have the same generating function.</p> </blockquote> Answer 59: <p>Let $L$ be the linear transformation from the space of polynomials in $t$ into the space of scalars that is determined by $$ L(t^n) = x_n \text{ for } n = 0,1,2,\ldots. $$ Then $$ y_n = \sum_{k=0}^n (-1)^k \binom n k x_k = \sum_{k=0}^n (-1)^k \binom n k L(t^k) = L\left( \sum_{k=0}^n \binom n k (-t)^k \right) = L((1-t)^n). $$ And \begin{align} x_n &amp; = L(t^n) = L((1-(1-t))^n) = L \left( \sum_{k=0}^n (-1)^k \binom n k (1-t)^k \right) \\[10pt] &amp; = \sum_{k=0}^n (-1)^k \binom n k L((1-t)^k) = \sum_{k=0}^n (-1)^k \binom n k y_k. \end{align} (As far as I know, this argument is due to Gian-Carlo Rota.)</p> Answer 60: <p>Consider the linear transformation of polynomials <span class="math-container">$P(t) \mapsto P(1-t)$</span>, clearly an involution <span class="math-container">$$a_0 + a_1 X + a_2 X^2 + \cdots \mapsto a_0 + a_1(1-X) + a_2 (1-X)^2 + \cdots $$</span> so the map in terms of the coefficients is <span class="math-container">$(a_i) \mapsto (b_i)$</span> where <span class="math-container">$$b_i = \sum_{k\ge i}(-1)^k \binom{k}{i} a_k$$</span> We see that we binomial transform is the dual (transpose) of this involution, so again an involution.</p>
Algebraic dimension of the projective space
2,082,942
6
https://math.stackexchange.com/questions/2082942/algebraic-dimension-of-the-projective-space
<p>This is exercise 2.1.3 from Huybrechts's "Complex Geometry - An Introduction". </p> <blockquote> <p>Determine the algebraic dimension of the following manifolds: $\mathbb{P}^1$, $\mathbb{P}^n$, and the complex torus $\mathbb{C}/(\mathbb{Z}+i\mathbb{Z})$. For the latter, you might need to recall some basic facts on the Weierstrass $\wp$-function. How big is the function field of $\mathbb{C}$?</p> </blockquote> <p>The algebraic dimension of $X$ is defined to be the <a href="https://en.wikipedia.org/wiki/Transcendence_degree" rel="noreferrer">transcendence degree</a> of the function field $K(X)$, the field of meromorphic functions on $X$. And there is Proposition 2.1.9 (Siegel), which states that the algebraic dimension of a compact connected manifold is not greater than the geometric (complex) dimension.</p> <p>My attempt: </p> <p>For $\mathbb{P}^1$ this is trivial: the algebraic dimension is $1$, since there are non-constant meromorphic functions on the Riemann sphere (e.g. the identity $z$), so the dimension is greater than $0$, and by Siegel's proposition it is at most $1$.</p> <p>The function field of $\mathbb{C}$ should have an infinite algebraic dimension since the functions $f(z)=z^k, k=1, 2, \cdots$ are holomorphic.</p> <p>Question: how about $\mathbb{P}^n$ (the projective space) and the complex torus? Note that the Riemann–Roch theorem has not been studied yet, so I'm seeking a solution without it.</p>
[ { "answer_id": 2082956, "score": 4, "text": "<p>Although $z^k$ is homolomorphic for $k = 1, 2, \\ldots$, these functions are not algebraically independent. For instance $(z^j)^k - (z^k)^j = 0$, so $z^j$ and $z^k$ are algebraically dependent for any $j,k$. However, there are more exotic holomorphic (and meromorphic) functions on $\\mathbb{C}$. Consider $e^z$, for example.</p>\n\n<p>I think you've proven that $\\mathbb{P}^1$ has dimension $\\geq 1$, but why is it $=1$? (<strong>EDIT</strong>: Ah, now I see that you're probably arguing that the geometric dimension is $1$ and using the proposition you cited.) Try showing that the function field of $\\mathbb{P}^1$ is just the usual field of rational functions $\\mathbb{C}(z)$. This may also give you some inspiration for $\\mathbb{P}^n$: try showing that its function field is isomorphic to $\\mathbb{C}(z_1, \\ldots, z_n)$, where $z_i = Z_i/Z_0$. (Here $Z_0, Z_1, \\ldots, Z_n$ are the homogeneous coordinates on $\\mathbb{P}^n$.)</p>\n\n<p>Recall that the function field $\\mathbb{C}(T)$ of a torus is generated by the Weierstrass $\\wp$-function and its derivative. (This is proved, for instance, in Stein and Shakarchi's <em>Complex Analysis</em>, p. 271.) Since $\\wp$ satisfies the differential equation\n$$\n(\\wp')^2 = \\wp^3 + c_4 \\wp + c_6\n$$\n$\\wp$ and $\\wp'$ are algebraically dependent. (This is because a complex torus is an elliptic curve; see <a href=\"https://en.wikipedia.org/wiki/Weierstrass%27s_elliptic_functions\" rel=\"nofollow noreferrer\">here</a> for more.) Thus $\\mathbb{C}(T)$ has transcendence degree $1$.</p>\n" }, { "answer_id": 4597686, "score": 0, "text": "<p>For the projective space we can argue as follows:</p>\n<p>We can define a rational function <span class=\"math-container\">$f\\in \\Bbb{C}(\\xi_1,....,\\xi_n)$</span> on <span class=\"math-container\">$U_0=\\{z_0\\ne 0\\}\\cong \\Bbb{C}^n$</span>, then we can extend it to a meromorphic function on <span class=\"math-container\">$\\Bbb{CP}^n$</span> since the transition function is rational, the composition of them is still rational. Therefore we have a map <span class=\"math-container\">$$\\Bbb{C}(\\xi_1,...,\\xi_n)\\to \\mathcal{M}(\\Bbb{CP}^n)\\\\f\\mapsto F$$</span> where <span class=\"math-container\">$F$</span> is the extension of the rational function <span class=\"math-container\">$f$</span> as we did above.</p>\n<p>it's an injective map.</p>\n<p>Therefore <span class=\"math-container\">$ n = \\text{trdeg} _{\\Bbb{C}} \\Bbb{C}(\\xi_1,...,\\xi_n) \\le \\text{trdeg} _{\\Bbb{C}} \\mathcal{M}(\\Bbb{CP}^n)$</span></p>\n<p>Finally you know the Siegel theorem, which gives bound in the other direction therefore:</p>\n<p><span class=\"math-container\">$$\\text{trdeg}_{\\Bbb{C}}\\mathcal{M}(\\Bbb{CP}^n) = n$$</span></p>\n" }, { "answer_id": 2082956, "score": 4, "text": "<p>Although $z^k$ is homolomorphic for $k = 1, 2, \\ldots$, these functions are not algebraically independent. For instance $(z^j)^k - (z^k)^j = 0$, so $z^j$ and $z^k$ are algebraically dependent for any $j,k$. However, there are more exotic holomorphic (and meromorphic) functions on $\\mathbb{C}$. Consider $e^z$, for example.</p>\n\n<p>I think you've proven that $\\mathbb{P}^1$ has dimension $\\geq 1$, but why is it $=1$? (<strong>EDIT</strong>: Ah, now I see that you're probably arguing that the geometric dimension is $1$ and using the proposition you cited.) Try showing that the function field of $\\mathbb{P}^1$ is just the usual field of rational functions $\\mathbb{C}(z)$. 
Therefore we have a map <span class=\"math-container\">$$\\Bbb{C}(\\xi_1,...,\\xi_n)\\to \\mathcal{M}(\\Bbb{CP}^n)\\\\f\\mapsto F$$</span> where <span class=\"math-container\">$F$</span> is the extension of the rational function <span class=\"math-container\">$f$</span> as we did above.</p>\n<p>it's an injective map.</p>\n<p>Therefore <span class=\"math-container\">$ n = \\text{trdeg} _{\\Bbb{C}} \\Bbb{C}(\\xi_1,...,\\xi_n) \\le \\text{trdeg} _{\\Bbb{C}} \\mathcal{M}(\\Bbb{CP}^n)$</span></p>\n<p>Finally you know the Siegel theorem, which gives bound in the other direction therefore:</p>\n<p><span class=\"math-container\">$$\\text{trdeg}_{\\Bbb{C}}\\mathcal{M}(\\Bbb{CP}^n) = n$$</span></p>\n" }, { "answer_id": 2082956, "score": 4, "text": "<p>Although $z^k$ is homolomorphic for $k = 1, 2, \\ldots$, these functions are not algebraically independent. For instance $(z^j)^k - (z^k)^j = 0$, so $z^j$ and $z^k$ are algebraically dependent for any $j,k$. However, there are more exotic holomorphic (and meromorphic) functions on $\\mathbb{C}$. Consider $e^z$, for example.</p>\n\n<p>I think you've proven that $\\mathbb{P}^1$ has dimension $\\geq 1$, but why is it $=1$? (<strong>EDIT</strong>: Ah, now I see that you're probably arguing that the geometric dimension is $1$ and using the proposition you cited.) Try showing that the function field of $\\mathbb{P}^1$ is just the usual field of rational functions $\\mathbb{C}(z)$. This may also give you some inspiration for $\\mathbb{P}^n$: try showing that its function field is isomorphic to $\\mathbb{C}(z_1, \\ldots, z_n)$, where $z_i = Z_i/Z_0$. (Here $Z_0, Z_1, \\ldots, Z_n$ are the homogeneous coordinates on $\\mathbb{P}^n$.)</p>\n\n<p>Recall that the function field $\\mathbb{C}(T)$ of a torus is generated by the Weierstrass $\\wp$-function and its derivative. (This is proved, for instance, in Stein and Shakarchi's <em>Complex Analysis</em>, p. 271.) Since $\\wp$ satisfies the differential equation\n$$\n(\\wp')^2 = \\wp^3 + c_4 \\wp + c_6\n$$\n$\\wp$ and $\\wp'$ are algebraically dependent. (This is because a complex torus is an elliptic curve; see <a href=\"https://en.wikipedia.org/wiki/Weierstrass%27s_elliptic_functions\" rel=\"nofollow noreferrer\">here</a> for more.) Thus $\\mathbb{C}(T)$ has transcendence degree $1$.</p>\n" }, { "answer_id": 4597686, "score": 0, "text": "<p>For the projective space we can argue as follows:</p>\n<p>We can define a rational function <span class=\"math-container\">$f\\in \\Bbb{C}(\\xi_1,....,\\xi_n)$</span> on <span class=\"math-container\">$U_0=\\{z_0\\ne 0\\}\\cong \\Bbb{C}^n$</span>, then we can extend it to a meromorphic function on <span class=\"math-container\">$\\Bbb{CP}^n$</span> since the transition function is rational, the composition of them is still rational. Therefore we have a map <span class=\"math-container\">$$\\Bbb{C}(\\xi_1,...,\\xi_n)\\to \\mathcal{M}(\\Bbb{CP}^n)\\\\f\\mapsto F$$</span> where <span class=\"math-container\">$F$</span> is the extension of the rational function <span class=\"math-container\">$f$</span> as we did above.</p>\n<p>it's an injective map.</p>\n<p>Therefore <span class=\"math-container\">$ n = \\text{trdeg} _{\\Bbb{C}} \\Bbb{C}(\\xi_1,...,\\xi_n) \\le \\text{trdeg} _{\\Bbb{C}} \\mathcal{M}(\\Bbb{CP}^n)$</span></p>\n<p>Finally you know the Siegel theorem, which gives bound in the other direction therefore:</p>\n<p><span class=\"math-container\">$$\\text{trdeg}_{\\Bbb{C}}\\mathcal{M}(\\Bbb{CP}^n) = n$$</span></p>\n" } ]
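The key step in the answer above, that extending a rational function across charts keeps it meromorphic, can be sanity-checked symbolically for $n=1$, where the transition map between the two standard affine charts of $\Bbb{CP}^1$ is $\xi\mapsto 1/\xi$. A minimal sketch in Python with sympy; the sample rational function is an arbitrary illustrative choice, not taken from the original answer:

```python
import sympy as sp

xi = sp.symbols('xi')
f = (xi**2 + 1) / (xi - 2)  # arbitrary rational function on the chart U_0

# Transition map between the two affine charts of CP^1: xi -> 1/xi.
# cancel() rewrites the composition as a ratio of polynomials, confirming
# that it is again rational on the other chart.
g = sp.cancel(f.subs(xi, 1 / xi))
print(g)                           # a quotient of polynomials in xi
print(g.is_rational_function(xi))  # True
```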
Question: Algebraic dimension of the projective space Answer 1: <p>Although $z^k$ is holomorphic for $k = 1, 2, \ldots$, these functions are not algebraically independent. For instance $(z^j)^k - (z^k)^j = 0$, so $z^j$ and $z^k$ are algebraically dependent for any $j,k$. However, there are more exotic holomorphic (and meromorphic) functions on $\mathbb{C}$. Consider $e^z$, for example.</p> <p>I think you've proven that $\mathbb{P}^1$ has dimension $\geq 1$, but why is it $=1$? (<strong>EDIT</strong>: Ah, now I see that you're probably arguing that the geometric dimension is $1$ and using the proposition you cited.) Try showing that the function field of $\mathbb{P}^1$ is just the usual field of rational functions $\mathbb{C}(z)$. This may also give you some inspiration for $\mathbb{P}^n$: try showing that its function field is isomorphic to $\mathbb{C}(z_1, \ldots, z_n)$, where $z_i = Z_i/Z_0$. (Here $Z_0, Z_1, \ldots, Z_n$ are the homogeneous coordinates on $\mathbb{P}^n$.)</p> <p>Recall that the function field $\mathbb{C}(T)$ of a torus is generated by the Weierstrass $\wp$-function and its derivative. (This is proved, for instance, in Stein and Shakarchi's <em>Complex Analysis</em>, p. 271.) Since $\wp$ satisfies the differential equation $$ (\wp')^2 = \wp^3 + c_4 \wp + c_6 $$ $\wp$ and $\wp'$ are algebraically dependent. (This is because a complex torus is an elliptic curve; see <a href="https://en.wikipedia.org/wiki/Weierstrass%27s_elliptic_functions" rel="nofollow noreferrer">here</a> for more.) Thus $\mathbb{C}(T)$ has transcendence degree $1$.</p> Answer 2: <p>For the projective space we can argue as follows:</p> <p>We can define a rational function <span class="math-container">$f\in \Bbb{C}(\xi_1,...,\xi_n)$</span> on <span class="math-container">$U_0=\{z_0\ne 0\}\cong \Bbb{C}^n$</span>, then extend it to a meromorphic function on <span class="math-container">$\Bbb{CP}^n$</span>, since the transition functions are rational and compositions of rational functions are still rational. Therefore we have a map <span class="math-container">$$\Bbb{C}(\xi_1,...,\xi_n)\to \mathcal{M}(\Bbb{CP}^n)\\f\mapsto F$$</span> where <span class="math-container">$F$</span> is the extension of the rational function <span class="math-container">$f$</span> as we did above.</p> <p>This map is injective: it is a nonzero ring homomorphism out of a field, so its kernel is trivial.</p> <p>Therefore <span class="math-container">$ n = \text{trdeg} _{\Bbb{C}} \Bbb{C}(\xi_1,...,\xi_n) \le \text{trdeg} _{\Bbb{C}} \mathcal{M}(\Bbb{CP}^n)$</span></p> <p>Finally, you know the Siegel theorem, which gives the bound in the other direction; therefore:</p> <p><span class="math-container">$$\text{trdeg}_{\Bbb{C}}\mathcal{M}(\Bbb{CP}^n) = n$$</span></p>
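Answer 1's opening claim, that the powers $z^j$ and $z^k$ satisfy the polynomial relation $(z^j)^k - (z^k)^j = 0$, is easy to verify symbolically. A minimal sketch in Python with sympy; the concrete exponents $j=2$, $k=3$ are illustrative assumptions, and any positive integers behave the same way:

```python
import sympy as sp

z = sp.symbols('z')
j, k = 2, 3  # illustrative exponents; any positive integers work

u, v = z**j, z**k

# P(X, Y) = X**k - Y**j vanishes identically at (X, Y) = (u, v),
# which is precisely an algebraic relation between z**j and z**k over C.
print(sp.expand(u**k - v**j))  # prints 0
```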
Is there any double series that cannot exchange the sum?
2,082,950
6
https://math.stackexchange.com/questions/2082950/is-there-any-double-series-that-cannot-exchange-the-sum
<p>I want to find a sequence $(u_{n,p})_{(n,p)\in\mathbb{N}^2}$ that satisfies: $$ \sum_{n=0}^{\infty}\sum_{p=0}^{\infty}u_{n,p} ~\text{is convergent} $$ $$ \sum_{p=0}^{\infty}\sum_{n=0}^{\infty}u_{n,p} ~\text{is convergent too} $$ but $$ \sum_{p=0}^{\infty}\sum_{n=0}^{\infty}u_{n,p} \neq \sum_{n=0}^{\infty}\sum_{p=0}^{\infty}u_{n,p} $$</p>
[ { "answer_id": 2083024, "score": 4, "text": "<p>Let $u_{mn} = 1/(m^2 - n^2)$ if $m \\neq n$ and $u_{mn} =0$ if $m = n.$</p>\n\n<p>Note that</p>\n\n<p>$$\\frac{\\pi^2}{12} = \\sum_{n=1}^\\infty \\sum_{m=1, m \\neq n}^\\infty \\frac{1}{m^2 - n^2} \\neq \\sum_{m=1}^\\infty \\sum_{n=1, n \\neq m}^\\infty \\frac{1}{m^2 - n^2} = - \\frac{\\pi^2}{12}.$$</p>\n\n<p>By anti-symmetry, the double sum changes sign with an interchange of indices.</p>\n\n<p>To find the sum, use</p>\n\n<p>$$\\begin{align}\\sum_{m=1, m \\neq n}^\\infty \\frac{1}{m^2 - n^2} &amp;= \\lim_{M \\to \\infty} \\frac{1}{2n} \\sum_{m=1, m \\neq n}^M \\left(\\frac{1}{m-n} - \\frac{1}{m+n} \\right) \\\\ &amp;= \\lim_{M \\to \\infty} \\frac{1}{2n} \\left(\\sum_{k=1}^{M-n} \\frac{1}{k} - \\sum_{k=1}^{n-1} \\frac{1}{k} - \\sum_{k=n+1}^{M+n} \\frac{1}{k}\\right)\\\\ &amp;= \\lim_{M \\to \\infty} \\frac{1}{2n} \\left(\\frac{1}{n}- \\frac{1}{M-n+1} - \\ldots - \\frac{1}{M+n} \\right) \\\\ &amp;= \\frac{1}{2n^2} \\end{align}$$</p>\n" }, { "answer_id": 2083055, "score": 2, "text": "<p>Let <span class=\"math-container\">$$u_{n,p} = \\begin{cases}\n1 &amp; \\text{ if } n = p+1,\\\\\n-1 &amp; \\text{ if } n = p,\\\\\n0 &amp; \\text{ otherwise}.\n\\end{cases}$$</span>\nOn the one hand, we have:\n<span class=\"math-container\">$$\\begin{align}\n\\sum_{n=0}^{\\infty}\\sum_{p=0}^{\\infty}u_{n,p} &amp;= u_{0,0} + \\sum_{n=1}^{\\infty}(u_{n,n} + u_{n,n-1}) &amp;\\text{(all other terms are 0)}\\\\\n&amp; = -1 + \\sum_{n=1}^{\\infty}(-1 + 1) &amp;\\text{(evaluate the terms)}\\\\\n&amp; = -1\n\\end{align}$$</span>\nOn the other hand, we have:\n<span class=\"math-container\">$$\\begin{align}\n\\sum_{p=0}^{\\infty}\\sum_{n=0}^{\\infty}u_{n,p}&amp; = \\sum_{p=0}^{\\infty}(u_{p,p} + u_{p+1,p}) &amp;\\text{(all other terms are 0)}\\\\\n&amp;= \\sum_{p=0}^{\\infty}(-1 + 1) &amp;\\text{(evaluate the terms)}\\\\\n&amp;= 0\n\\end{align}$$</span></p>\n" }, { "answer_id": 2083332, "score": 0, "text": "<blockquote>\n <p><strong>Theorem:</strong> Let $\\sum_{i=0}^\\infty a_i$ be a conditionally convergent series. Then for every $x\\in\\mathbb{R}$ there exists a $\\sigma$ in the permutation group of $\\mathbb{N}$ such that $$\\sum_{i=0}^\\infty a_{\\sigma(i)}=x$$</p>\n \n <p>Put in words, this theorem says that given a conditionally convergent series, for every real number there is some reordering of the sum such that the reordered sum gives the real number.</p>\n</blockquote>\n\n<p>We can use this theorem to construct examples of series of the sort that you are interested in. Let $\\sum a_i=\\sum(-1)^i/i$. The double summation gives an ordering on $\\{u_{n,p}\\}$, specifically the order that they will be summed in. You can picture this as a $\\mathbb{N}\\times\\mathbb{N}$ grid, where each row corresponds to some fixed $p$ and each column corresponds to some fixed $n$. Now we wish to lay out our sequence $a_i$ in this ordering so that going rows by columns gives a different answer than going columns by rows. 
Doing this is straight forward and simply requires making each row and column a conditionally convergent series as there are no repeated terms which guarantees that reordering the sum will change the value.</p>\n" }, { "answer_id": 2083024, "score": 4, "text": "<p>Let $u_{mn} = 1/(m^2 - n^2)$ if $m \\neq n$ and $u_{mn} =0$ if $m = n.$</p>\n\n<p>Note that</p>\n\n<p>$$\\frac{\\pi^2}{12} = \\sum_{n=1}^\\infty \\sum_{m=1, m \\neq n}^\\infty \\frac{1}{m^2 - n^2} \\neq \\sum_{m=1}^\\infty \\sum_{n=1, n \\neq m}^\\infty \\frac{1}{m^2 - n^2} = - \\frac{\\pi^2}{12}.$$</p>\n\n<p>By anti-symmetry, the double sum changes sign with an interchange of indices.</p>\n\n<p>To find the sum, use</p>\n\n<p>$$\\begin{align}\\sum_{m=1, m \\neq n}^\\infty \\frac{1}{m^2 - n^2} &amp;= \\lim_{M \\to \\infty} \\frac{1}{2n} \\sum_{m=1, m \\neq n}^M \\left(\\frac{1}{m-n} - \\frac{1}{m+n} \\right) \\\\ &amp;= \\lim_{M \\to \\infty} \\frac{1}{2n} \\left(\\sum_{k=1}^{M-n} \\frac{1}{k} - \\sum_{k=1}^{n-1} \\frac{1}{k} - \\sum_{k=n+1}^{M+n} \\frac{1}{k}\\right)\\\\ &amp;= \\lim_{M \\to \\infty} \\frac{1}{2n} \\left(\\frac{1}{n}- \\frac{1}{M-n+1} - \\ldots - \\frac{1}{M+n} \\right) \\\\ &amp;= \\frac{1}{2n^2} \\end{align}$$</p>\n" }, { "answer_id": 2083055, "score": 2, "text": "<p>Let <span class=\"math-container\">$$u_{n,p} = \\begin{cases}\n1 &amp; \\text{ if } n = p+1,\\\\\n-1 &amp; \\text{ if } n = p,\\\\\n0 &amp; \\text{ otherwise}.\n\\end{cases}$$</span>\nOn the one hand, we have:\n<span class=\"math-container\">$$\\begin{align}\n\\sum_{n=0}^{\\infty}\\sum_{p=0}^{\\infty}u_{n,p} &amp;= u_{0,0} + \\sum_{n=1}^{\\infty}(u_{n,n} + u_{n,n-1}) &amp;\\text{(all other terms are 0)}\\\\\n&amp; = -1 + \\sum_{n=1}^{\\infty}(-1 + 1) &amp;\\text{(evaluate the terms)}\\\\\n&amp; = -1\n\\end{align}$$</span>\nOn the other hand, we have:\n<span class=\"math-container\">$$\\begin{align}\n\\sum_{p=0}^{\\infty}\\sum_{n=0}^{\\infty}u_{n,p}&amp; = \\sum_{p=0}^{\\infty}(u_{p,p} + u_{p+1,p}) &amp;\\text{(all other terms are 0)}\\\\\n&amp;= \\sum_{p=0}^{\\infty}(-1 + 1) &amp;\\text{(evaluate the terms)}\\\\\n&amp;= 0\n\\end{align}$$</span></p>\n" }, { "answer_id": 2083332, "score": 0, "text": "<blockquote>\n <p><strong>Theorem:</strong> Let $\\sum_{i=0}^\\infty a_i$ be a conditionally convergent series. Then for every $x\\in\\mathbb{R}$ there exists a $\\sigma$ in the permutation group of $\\mathbb{N}$ such that $$\\sum_{i=0}^\\infty a_{\\sigma(i)}=x$$</p>\n \n <p>Put in words, this theorem says that given a conditionally convergent series, for every real number there is some reordering of the sum such that the reordered sum gives the real number.</p>\n</blockquote>\n\n<p>We can use this theorem to construct examples of series of the sort that you are interested in. Let $\\sum a_i=\\sum(-1)^i/i$. The double summation gives an ordering on $\\{u_{n,p}\\}$, specifically the order that they will be summed in. You can picture this as a $\\mathbb{N}\\times\\mathbb{N}$ grid, where each row corresponds to some fixed $p$ and each column corresponds to some fixed $n$. Now we wish to lay out our sequence $a_i$ in this ordering so that going rows by columns gives a different answer than going columns by rows. 
Doing this is straight forward and simply requires making each row and column a conditionally convergent series as there are no repeated terms which guarantees that reordering the sum will change the value.</p>\n" }, { "answer_id": 2083024, "score": 4, "text": "<p>Let $u_{mn} = 1/(m^2 - n^2)$ if $m \\neq n$ and $u_{mn} =0$ if $m = n.$</p>\n\n<p>Note that</p>\n\n<p>$$\\frac{\\pi^2}{12} = \\sum_{n=1}^\\infty \\sum_{m=1, m \\neq n}^\\infty \\frac{1}{m^2 - n^2} \\neq \\sum_{m=1}^\\infty \\sum_{n=1, n \\neq m}^\\infty \\frac{1}{m^2 - n^2} = - \\frac{\\pi^2}{12}.$$</p>\n\n<p>By anti-symmetry, the double sum changes sign with an interchange of indices.</p>\n\n<p>To find the sum, use</p>\n\n<p>$$\\begin{align}\\sum_{m=1, m \\neq n}^\\infty \\frac{1}{m^2 - n^2} &amp;= \\lim_{M \\to \\infty} \\frac{1}{2n} \\sum_{m=1, m \\neq n}^M \\left(\\frac{1}{m-n} - \\frac{1}{m+n} \\right) \\\\ &amp;= \\lim_{M \\to \\infty} \\frac{1}{2n} \\left(\\sum_{k=1}^{M-n} \\frac{1}{k} - \\sum_{k=1}^{n-1} \\frac{1}{k} - \\sum_{k=n+1}^{M+n} \\frac{1}{k}\\right)\\\\ &amp;= \\lim_{M \\to \\infty} \\frac{1}{2n} \\left(\\frac{1}{n}- \\frac{1}{M-n+1} - \\ldots - \\frac{1}{M+n} \\right) \\\\ &amp;= \\frac{1}{2n^2} \\end{align}$$</p>\n" }, { "answer_id": 2083055, "score": 2, "text": "<p>Let <span class=\"math-container\">$$u_{n,p} = \\begin{cases}\n1 &amp; \\text{ if } n = p+1,\\\\\n-1 &amp; \\text{ if } n = p,\\\\\n0 &amp; \\text{ otherwise}.\n\\end{cases}$$</span>\nOn the one hand, we have:\n<span class=\"math-container\">$$\\begin{align}\n\\sum_{n=0}^{\\infty}\\sum_{p=0}^{\\infty}u_{n,p} &amp;= u_{0,0} + \\sum_{n=1}^{\\infty}(u_{n,n} + u_{n,n-1}) &amp;\\text{(all other terms are 0)}\\\\\n&amp; = -1 + \\sum_{n=1}^{\\infty}(-1 + 1) &amp;\\text{(evaluate the terms)}\\\\\n&amp; = -1\n\\end{align}$$</span>\nOn the other hand, we have:\n<span class=\"math-container\">$$\\begin{align}\n\\sum_{p=0}^{\\infty}\\sum_{n=0}^{\\infty}u_{n,p}&amp; = \\sum_{p=0}^{\\infty}(u_{p,p} + u_{p+1,p}) &amp;\\text{(all other terms are 0)}\\\\\n&amp;= \\sum_{p=0}^{\\infty}(-1 + 1) &amp;\\text{(evaluate the terms)}\\\\\n&amp;= 0\n\\end{align}$$</span></p>\n" }, { "answer_id": 2083332, "score": 0, "text": "<blockquote>\n <p><strong>Theorem:</strong> Let $\\sum_{i=0}^\\infty a_i$ be a conditionally convergent series. Then for every $x\\in\\mathbb{R}$ there exists a $\\sigma$ in the permutation group of $\\mathbb{N}$ such that $$\\sum_{i=0}^\\infty a_{\\sigma(i)}=x$$</p>\n \n <p>Put in words, this theorem says that given a conditionally convergent series, for every real number there is some reordering of the sum such that the reordered sum gives the real number.</p>\n</blockquote>\n\n<p>We can use this theorem to construct examples of series of the sort that you are interested in. Let $\\sum a_i=\\sum(-1)^i/i$. The double summation gives an ordering on $\\{u_{n,p}\\}$, specifically the order that they will be summed in. You can picture this as a $\\mathbb{N}\\times\\mathbb{N}$ grid, where each row corresponds to some fixed $p$ and each column corresponds to some fixed $n$. Now we wish to lay out our sequence $a_i$ in this ordering so that going rows by columns gives a different answer than going columns by rows. 
Doing this is straight forward and simply requires making each row and column a conditionally convergent series as there are no repeated terms which guarantees that reordering the sum will change the value.</p>\n" }, { "answer_id": 2083024, "score": 4, "text": "<p>Let $u_{mn} = 1/(m^2 - n^2)$ if $m \\neq n$ and $u_{mn} =0$ if $m = n.$</p>\n\n<p>Note that</p>\n\n<p>$$\\frac{\\pi^2}{12} = \\sum_{n=1}^\\infty \\sum_{m=1, m \\neq n}^\\infty \\frac{1}{m^2 - n^2} \\neq \\sum_{m=1}^\\infty \\sum_{n=1, n \\neq m}^\\infty \\frac{1}{m^2 - n^2} = - \\frac{\\pi^2}{12}.$$</p>\n\n<p>By anti-symmetry, the double sum changes sign with an interchange of indices.</p>\n\n<p>To find the sum, use</p>\n\n<p>$$\\begin{align}\\sum_{m=1, m \\neq n}^\\infty \\frac{1}{m^2 - n^2} &amp;= \\lim_{M \\to \\infty} \\frac{1}{2n} \\sum_{m=1, m \\neq n}^M \\left(\\frac{1}{m-n} - \\frac{1}{m+n} \\right) \\\\ &amp;= \\lim_{M \\to \\infty} \\frac{1}{2n} \\left(\\sum_{k=1}^{M-n} \\frac{1}{k} - \\sum_{k=1}^{n-1} \\frac{1}{k} - \\sum_{k=n+1}^{M+n} \\frac{1}{k}\\right)\\\\ &amp;= \\lim_{M \\to \\infty} \\frac{1}{2n} \\left(\\frac{1}{n}- \\frac{1}{M-n+1} - \\ldots - \\frac{1}{M+n} \\right) \\\\ &amp;= \\frac{1}{2n^2} \\end{align}$$</p>\n" }, { "answer_id": 2083055, "score": 2, "text": "<p>Let <span class=\"math-container\">$$u_{n,p} = \\begin{cases}\n1 &amp; \\text{ if } n = p+1,\\\\\n-1 &amp; \\text{ if } n = p,\\\\\n0 &amp; \\text{ otherwise}.\n\\end{cases}$$</span>\nOn the one hand, we have:\n<span class=\"math-container\">$$\\begin{align}\n\\sum_{n=0}^{\\infty}\\sum_{p=0}^{\\infty}u_{n,p} &amp;= u_{0,0} + \\sum_{n=1}^{\\infty}(u_{n,n} + u_{n,n-1}) &amp;\\text{(all other terms are 0)}\\\\\n&amp; = -1 + \\sum_{n=1}^{\\infty}(-1 + 1) &amp;\\text{(evaluate the terms)}\\\\\n&amp; = -1\n\\end{align}$$</span>\nOn the other hand, we have:\n<span class=\"math-container\">$$\\begin{align}\n\\sum_{p=0}^{\\infty}\\sum_{n=0}^{\\infty}u_{n,p}&amp; = \\sum_{p=0}^{\\infty}(u_{p,p} + u_{p+1,p}) &amp;\\text{(all other terms are 0)}\\\\\n&amp;= \\sum_{p=0}^{\\infty}(-1 + 1) &amp;\\text{(evaluate the terms)}\\\\\n&amp;= 0\n\\end{align}$$</span></p>\n" }, { "answer_id": 2083332, "score": 0, "text": "<blockquote>\n <p><strong>Theorem:</strong> Let $\\sum_{i=0}^\\infty a_i$ be a conditionally convergent series. Then for every $x\\in\\mathbb{R}$ there exists a $\\sigma$ in the permutation group of $\\mathbb{N}$ such that $$\\sum_{i=0}^\\infty a_{\\sigma(i)}=x$$</p>\n \n <p>Put in words, this theorem says that given a conditionally convergent series, for every real number there is some reordering of the sum such that the reordered sum gives the real number.</p>\n</blockquote>\n\n<p>We can use this theorem to construct examples of series of the sort that you are interested in. Let $\\sum a_i=\\sum(-1)^i/i$. The double summation gives an ordering on $\\{u_{n,p}\\}$, specifically the order that they will be summed in. You can picture this as a $\\mathbb{N}\\times\\mathbb{N}$ grid, where each row corresponds to some fixed $p$ and each column corresponds to some fixed $n$. Now we wish to lay out our sequence $a_i$ in this ordering so that going rows by columns gives a different answer than going columns by rows. 
Question: Is there any double series that cannot exchange the sum? Answer 1: <p>Let $u_{mn} = 1/(m^2 - n^2)$ if $m \neq n$ and $u_{mn} = 0$ if $m = n.$</p> <p>Note that</p> <p>$$\frac{\pi^2}{8} = \sum_{n=1}^\infty \sum_{m=1, m \neq n}^\infty \frac{1}{m^2 - n^2} \neq \sum_{m=1}^\infty \sum_{n=1, n \neq m}^\infty \frac{1}{m^2 - n^2} = - \frac{\pi^2}{8}.$$</p> <p>By anti-symmetry, the double sum changes sign with an interchange of indices.</p> <p>To find the inner sum, use</p> <p>$$\begin{align}\sum_{m=1, m \neq n}^\infty \frac{1}{m^2 - n^2} &amp;= \lim_{M \to \infty} \frac{1}{2n} \sum_{m=1, m \neq n}^M \left(\frac{1}{m-n} - \frac{1}{m+n} \right) \\ &amp;= \lim_{M \to \infty} \frac{1}{2n} \left(\sum_{k=1}^{M-n} \frac{1}{k} - \sum_{k=1}^{n-1} \frac{1}{k} - \sum_{k=n+1}^{M+n} \frac{1}{k} + \frac{1}{2n}\right)\\ &amp;= \lim_{M \to \infty} \frac{1}{2n} \left(\frac{1}{n} + \frac{1}{2n} - \frac{1}{M-n+1} - \ldots - \frac{1}{M+n} \right) \\ &amp;= \frac{3}{4n^2} \end{align}$$ where the $+ \frac{1}{2n}$ accounts for the excluded $m = n$ term of the $\frac{1}{m+n}$ sum, and $\sum_{n=1}^\infty \frac{3}{4n^2} = \frac{\pi^2}{8}$.</p> Answer 2: <p>Let <span class="math-container">$$u_{n,p} = \begin{cases} 1 &amp; \text{ if } n = p+1,\\ -1 &amp; \text{ if } n = p,\\ 0 &amp; \text{ otherwise}. \end{cases}$$</span> On the one hand, we have: <span class="math-container">$$\begin{align} \sum_{n=0}^{\infty}\sum_{p=0}^{\infty}u_{n,p} &amp;= u_{0,0} + \sum_{n=1}^{\infty}(u_{n,n} + u_{n,n-1}) &amp;\text{(all other terms are 0)}\\ &amp; = -1 + \sum_{n=1}^{\infty}(-1 + 1) &amp;\text{(evaluate the terms)}\\ &amp; = -1 \end{align}$$</span> On the other hand, we have: <span class="math-container">$$\begin{align} \sum_{p=0}^{\infty}\sum_{n=0}^{\infty}u_{n,p}&amp; = \sum_{p=0}^{\infty}(u_{p,p} + u_{p+1,p}) &amp;\text{(all other terms are 0)}\\ &amp;= \sum_{p=0}^{\infty}(-1 + 1) &amp;\text{(evaluate the terms)}\\ &amp;= 0 \end{align}$$</span></p> Answer 3: <blockquote> <p><strong>Theorem:</strong> Let $\sum_{i=0}^\infty a_i$ be a conditionally convergent series. Then for every $x\in\mathbb{R}$ there exists a $\sigma$ in the permutation group of $\mathbb{N}$ such that $$\sum_{i=0}^\infty a_{\sigma(i)}=x$$</p> <p>Put in words, this theorem says that given a conditionally convergent series, for every real number there is some reordering of the sum such that the reordered sum gives the real number.</p> </blockquote> <p>We can use this theorem to construct examples of series of the sort that you are interested in. Let $\sum a_i=\sum_{i\ge 1}(-1)^i/i$. The double summation gives an ordering on $\{u_{n,p}\}$, specifically the order in which they will be summed. You can picture this as an $\mathbb{N}\times\mathbb{N}$ grid, where each row corresponds to some fixed $p$ and each column corresponds to some fixed $n$. Now we wish to lay out our sequence $a_i$ in this ordering so that going rows by columns gives a different answer than going columns by rows. Doing this is straightforward and simply requires making each row and column a conditionally convergent series; since there are no repeated terms, reordering the sum is guaranteed to change the value.</p>
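As a numerical sanity check on Answer 1 (illustrative only and not part of any posted answer; the cutoffs M and N_MAX below are arbitrary assumptions), the inner sum can be truncated in Python and compared against the closed form $3/(4n^2)$:

from math import pi

M = 200000  # arbitrary truncation of the inner sum over m; a larger M tightens the check

def inner(n):
    # partial sum over m = 1..M (m != n) of 1/(m^2 - n^2); it should approach 3/(4*n**2)
    return sum(1.0 / (m * m - n * n) for m in range(1, M + 1) if m != n)

print(inner(1), 3 / 4)   # roughly 0.749995 vs 0.75
print(inner(2), 3 / 16)  # roughly 0.187495 vs 0.1875

# accumulating the closed-form inner sums over n creeps up toward pi^2/8;
# by antisymmetry, summing in the opposite order approaches -pi^2/8
N_MAX = 2000  # arbitrary outer cutoff
print(sum(3 / (4 * n * n) for n in range(1, N_MAX + 1)), pi * pi / 8)

With N_MAX = 2000 the last line agrees with $\pi^2/8 \approx 1.2337$ to about three decimal places.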
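The 0/±1 example in Answer 2 can be checked exactly by truncation, because every row and every column has only finitely many nonzero entries. A minimal Python sketch, again illustrative, with an arbitrary cutoff N:

def u(n, p):
    # +1 just below the diagonal, -1 on the diagonal, 0 elsewhere
    if n == p + 1:
        return 1
    if n == p:
        return -1
    return 0

N = 100  # arbitrary; each inner sum is exact once the inner range covers n = p + 1

n_then_p = sum(sum(u(n, p) for p in range(N + 2)) for n in range(N))  # row n = 0 gives -1, later rows cancel
p_then_n = sum(sum(u(n, p) for n in range(N + 2)) for p in range(N))  # every column cancels to 0

print(n_then_p, p_then_n)  # prints: -1 0

This reproduces the two iterated values $-1$ and $0$ computed in the answer.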
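The rearrangement theorem quoted in Answer 3 can also be seen concretely with the classic alternating harmonic series: taking two positive terms for every negative term converges to $\frac{3}{2}\ln 2$ rather than $\ln 2$. A short Python sketch (illustrative; the number of rounds is an arbitrary assumption):

from math import log

def rearranged(rounds):
    # two odd-denominator (+) terms, then one even-denominator (-) term, repeated
    s, odd, even = 0.0, 1, 2
    for _ in range(rounds):
        s += 1.0 / odd
        odd += 2
        s += 1.0 / odd
        odd += 2
        s -= 1.0 / even
        even += 2
    return s

print(rearranged(100000), 1.5 * log(2))  # both are roughly 1.0397

Summing the same terms in the usual alternating order gives $\ln 2 \approx 0.6931$ instead.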
How to show $\sum\limits_{r=0}^n \frac{1}{r!} \lt\left (1 + \frac{1}{n}\right)^{n+1}$ for all $n \ge 1$?
2,082,986
6
https://math.stackexchange.com/questions/2082986/how-to-show-sum-limits-r-0n-frac1r-lt-left-1-frac1n-right
<p>Using the binomial expansion, it is quite easy to show that $$\left(1+\frac{1}{n}\right)^n \le \sum_{r=0}^{n} \frac{1}{r!} $$ for all $n\in\mathbb{Z^+}$, with equality when $n=1$. (Can it be proved by mathematical induction?)<br> But it seems to me really difficult to prove that $$\sum_{r=0}^{n} \frac{1}{r!}\lt \left(1+\frac{1}{n}\right)^{n+1}$$ for all $n\in\mathbb{Z^+}$. Can anyone prove it?<br> As it can be proved that the sequences $\{(1+\frac{1}{n})^n\}$ and $\{(1+\frac{1}{n})^{n+1}\}$ converge to the same limit, the above inequalities will help establish the equivalence between the following definitions of $e$: $$ \begin{align} e &amp;= \lim_{n\to \infty}\left(1+\frac{1}{n}\right)^n\\ e &amp;= \sum_{r=0}^{\infty} \frac{1}{r!} \end{align} $$</p>
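<p>A quick numerical sanity check of both inequalities (an editorial sketch, not part of the original post; the function name <code>check</code> is just a placeholder). Exact rational arithmetic via <code>fractions.Fraction</code> avoids any floating-point doubt:</p> <pre><code>from fractions import Fraction

def check(n_max=60):
    # exact check of (1 + 1/n)^n &lt;= sum_{r=0}^{n} 1/r! &lt; (1 + 1/n)^(n+1)
    partial = Fraction(1)        # the r = 0 term of the sum
    factorial = 1
    for n in range(1, n_max + 1):
        factorial *= n
        partial += Fraction(1, factorial)   # partial = sum_{r=0}^{n} 1/r!
        base = 1 + Fraction(1, n)
        assert base ** n &lt;= partial, n      # equality holds at n = 1
        assert partial &lt; base ** (n + 1), n
    print("verified for n = 1 ..", n_max)

check()
</code></pre>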
[ { "answer_id": 2083015, "score": 2, "text": "<p>Just $$\\sum_{r=0}^{n} \\frac{1}{r!}&lt;\\sum_{r=0}^{\\infty} \\frac{1}{r!}=e\\lt \\left(1+\\frac{1}{n}\\right)^{n+1}$$ \nbecause easy to show that $f(x)=\\left(1+\\frac{1}{x}\\right)^{x+1}$ is a decreasing function on $(0,+\\infty)$.</p>\n" }, { "answer_id": 2142125, "score": 1, "text": "<p><strong>Answer Attempt: Inequality too Weak</strong></p>\n\n<p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p>\n\n<hr>\n\n<p>We assume that\n$$\\left(1+\\frac{1}{n}\\right)^{n} \\le \\sum_{r=0}^{n} \\frac{1}{r!} \\tag{1}$$\nAnd we want to prove that<br>\n$$\\left(\\frac{n+2}{n+1}\\right)^{n+2} \\le \\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1} \\frac{1}{r!} \\tag{2}$$\nAs usual, we assume this to be true. OP already proved $n=1$ is a base case. We now have that\n$$\\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\sum_{r=0}^{n+1}\\frac{1}{r!}+\\frac{1}{n+1}\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\frac{1}{(n+1)!}+\\sum_{r=0}^{n}\\frac{1}{r!}+\\frac{1}{(n+1)(n+1)!}+\\frac{1}{n+1}\\sum_{r=0}^{n}\\frac{1}{r!}$$</p>\n\n<p>$$\\ge \\frac{n+2}{(n+1)^2 n!}+\\left(1+\\frac{1}{n}\\right)^n\\left(\\frac{n+2}{n+1}\\right)$$\nUnfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. Given that I appealed only to the induction hypothesis so far this is midly surprising; perhaps the argument can be fixed from here?</p>\n" }, { "answer_id": 2083015, "score": 2, "text": "<p>Just $$\\sum_{r=0}^{n} \\frac{1}{r!}&lt;\\sum_{r=0}^{\\infty} \\frac{1}{r!}=e\\lt \\left(1+\\frac{1}{n}\\right)^{n+1}$$ \nbecause easy to show that $f(x)=\\left(1+\\frac{1}{x}\\right)^{x+1}$ is a decreasing function on $(0,+\\infty)$.</p>\n" }, { "answer_id": 2142125, "score": 1, "text": "<p><strong>Answer Attempt: Inequality too Weak</strong></p>\n\n<p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p>\n\n<hr>\n\n<p>We assume that\n$$\\left(1+\\frac{1}{n}\\right)^{n} \\le \\sum_{r=0}^{n} \\frac{1}{r!} \\tag{1}$$\nAnd we want to prove that<br>\n$$\\left(\\frac{n+2}{n+1}\\right)^{n+2} \\le \\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1} \\frac{1}{r!} \\tag{2}$$\nAs usual, we assume this to be true. OP already proved $n=1$ is a base case. We now have that\n$$\\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\sum_{r=0}^{n+1}\\frac{1}{r!}+\\frac{1}{n+1}\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\frac{1}{(n+1)!}+\\sum_{r=0}^{n}\\frac{1}{r!}+\\frac{1}{(n+1)(n+1)!}+\\frac{1}{n+1}\\sum_{r=0}^{n}\\frac{1}{r!}$$</p>\n\n<p>$$\\ge \\frac{n+2}{(n+1)^2 n!}+\\left(1+\\frac{1}{n}\\right)^n\\left(\\frac{n+2}{n+1}\\right)$$\nUnfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. 
Given that I appealed only to the induction hypothesis so far this is midly surprising; perhaps the argument can be fixed from here?</p>\n" }, { "answer_id": 2083015, "score": 2, "text": "<p>Just $$\\sum_{r=0}^{n} \\frac{1}{r!}&lt;\\sum_{r=0}^{\\infty} \\frac{1}{r!}=e\\lt \\left(1+\\frac{1}{n}\\right)^{n+1}$$ \nbecause easy to show that $f(x)=\\left(1+\\frac{1}{x}\\right)^{x+1}$ is a decreasing function on $(0,+\\infty)$.</p>\n" }, { "answer_id": 2142125, "score": 1, "text": "<p><strong>Answer Attempt: Inequality too Weak</strong></p>\n\n<p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p>\n\n<hr>\n\n<p>We assume that\n$$\\left(1+\\frac{1}{n}\\right)^{n} \\le \\sum_{r=0}^{n} \\frac{1}{r!} \\tag{1}$$\nAnd we want to prove that<br>\n$$\\left(\\frac{n+2}{n+1}\\right)^{n+2} \\le \\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1} \\frac{1}{r!} \\tag{2}$$\nAs usual, we assume this to be true. OP already proved $n=1$ is a base case. We now have that\n$$\\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\sum_{r=0}^{n+1}\\frac{1}{r!}+\\frac{1}{n+1}\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\frac{1}{(n+1)!}+\\sum_{r=0}^{n}\\frac{1}{r!}+\\frac{1}{(n+1)(n+1)!}+\\frac{1}{n+1}\\sum_{r=0}^{n}\\frac{1}{r!}$$</p>\n\n<p>$$\\ge \\frac{n+2}{(n+1)^2 n!}+\\left(1+\\frac{1}{n}\\right)^n\\left(\\frac{n+2}{n+1}\\right)$$\nUnfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. Given that I appealed only to the induction hypothesis so far this is midly surprising; perhaps the argument can be fixed from here?</p>\n" }, { "answer_id": 2083015, "score": 2, "text": "<p>Just $$\\sum_{r=0}^{n} \\frac{1}{r!}&lt;\\sum_{r=0}^{\\infty} \\frac{1}{r!}=e\\lt \\left(1+\\frac{1}{n}\\right)^{n+1}$$ \nbecause easy to show that $f(x)=\\left(1+\\frac{1}{x}\\right)^{x+1}$ is a decreasing function on $(0,+\\infty)$.</p>\n" }, { "answer_id": 2142125, "score": 1, "text": "<p><strong>Answer Attempt: Inequality too Weak</strong></p>\n\n<p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p>\n\n<hr>\n\n<p>We assume that\n$$\\left(1+\\frac{1}{n}\\right)^{n} \\le \\sum_{r=0}^{n} \\frac{1}{r!} \\tag{1}$$\nAnd we want to prove that<br>\n$$\\left(\\frac{n+2}{n+1}\\right)^{n+2} \\le \\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1} \\frac{1}{r!} \\tag{2}$$\nAs usual, we assume this to be true. OP already proved $n=1$ is a base case. We now have that\n$$\\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\sum_{r=0}^{n+1}\\frac{1}{r!}+\\frac{1}{n+1}\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\frac{1}{(n+1)!}+\\sum_{r=0}^{n}\\frac{1}{r!}+\\frac{1}{(n+1)(n+1)!}+\\frac{1}{n+1}\\sum_{r=0}^{n}\\frac{1}{r!}$$</p>\n\n<p>$$\\ge \\frac{n+2}{(n+1)^2 n!}+\\left(1+\\frac{1}{n}\\right)^n\\left(\\frac{n+2}{n+1}\\right)$$\nUnfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. 
Given that I appealed only to the induction hypothesis so far this is midly surprising; perhaps the argument can be fixed from here?</p>\n" }, { "answer_id": 2083015, "score": 2, "text": "<p>Just $$\\sum_{r=0}^{n} \\frac{1}{r!}&lt;\\sum_{r=0}^{\\infty} \\frac{1}{r!}=e\\lt \\left(1+\\frac{1}{n}\\right)^{n+1}$$ \nbecause easy to show that $f(x)=\\left(1+\\frac{1}{x}\\right)^{x+1}$ is a decreasing function on $(0,+\\infty)$.</p>\n" }, { "answer_id": 2142125, "score": 1, "text": "<p><strong>Answer Attempt: Inequality too Weak</strong></p>\n\n<p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p>\n\n<hr>\n\n<p>We assume that\n$$\\left(1+\\frac{1}{n}\\right)^{n} \\le \\sum_{r=0}^{n} \\frac{1}{r!} \\tag{1}$$\nAnd we want to prove that<br>\n$$\\left(\\frac{n+2}{n+1}\\right)^{n+2} \\le \\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1} \\frac{1}{r!} \\tag{2}$$\nAs usual, we assume this to be true. OP already proved $n=1$ is a base case. We now have that\n$$\\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\sum_{r=0}^{n+1}\\frac{1}{r!}+\\frac{1}{n+1}\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\frac{1}{(n+1)!}+\\sum_{r=0}^{n}\\frac{1}{r!}+\\frac{1}{(n+1)(n+1)!}+\\frac{1}{n+1}\\sum_{r=0}^{n}\\frac{1}{r!}$$</p>\n\n<p>$$\\ge \\frac{n+2}{(n+1)^2 n!}+\\left(1+\\frac{1}{n}\\right)^n\\left(\\frac{n+2}{n+1}\\right)$$\nUnfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. Given that I appealed only to the induction hypothesis so far this is midly surprising; perhaps the argument can be fixed from here?</p>\n" }, { "answer_id": 2083015, "score": 2, "text": "<p>Just $$\\sum_{r=0}^{n} \\frac{1}{r!}&lt;\\sum_{r=0}^{\\infty} \\frac{1}{r!}=e\\lt \\left(1+\\frac{1}{n}\\right)^{n+1}$$ \nbecause easy to show that $f(x)=\\left(1+\\frac{1}{x}\\right)^{x+1}$ is a decreasing function on $(0,+\\infty)$.</p>\n" }, { "answer_id": 2142125, "score": 1, "text": "<p><strong>Answer Attempt: Inequality too Weak</strong></p>\n\n<p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p>\n\n<hr>\n\n<p>We assume that\n$$\\left(1+\\frac{1}{n}\\right)^{n} \\le \\sum_{r=0}^{n} \\frac{1}{r!} \\tag{1}$$\nAnd we want to prove that<br>\n$$\\left(\\frac{n+2}{n+1}\\right)^{n+2} \\le \\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1} \\frac{1}{r!} \\tag{2}$$\nAs usual, we assume this to be true. OP already proved $n=1$ is a base case. We now have that\n$$\\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\sum_{r=0}^{n+1}\\frac{1}{r!}+\\frac{1}{n+1}\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\frac{1}{(n+1)!}+\\sum_{r=0}^{n}\\frac{1}{r!}+\\frac{1}{(n+1)(n+1)!}+\\frac{1}{n+1}\\sum_{r=0}^{n}\\frac{1}{r!}$$</p>\n\n<p>$$\\ge \\frac{n+2}{(n+1)^2 n!}+\\left(1+\\frac{1}{n}\\right)^n\\left(\\frac{n+2}{n+1}\\right)$$\nUnfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. 
Given that I appealed only to the induction hypothesis so far this is midly surprising; perhaps the argument can be fixed from here?</p>\n" }, { "answer_id": 2083015, "score": 2, "text": "<p>Just $$\\sum_{r=0}^{n} \\frac{1}{r!}&lt;\\sum_{r=0}^{\\infty} \\frac{1}{r!}=e\\lt \\left(1+\\frac{1}{n}\\right)^{n+1}$$ \nbecause easy to show that $f(x)=\\left(1+\\frac{1}{x}\\right)^{x+1}$ is a decreasing function on $(0,+\\infty)$.</p>\n" }, { "answer_id": 2142125, "score": 1, "text": "<p><strong>Answer Attempt: Inequality too Weak</strong></p>\n\n<p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p>\n\n<hr>\n\n<p>We assume that\n$$\\left(1+\\frac{1}{n}\\right)^{n} \\le \\sum_{r=0}^{n} \\frac{1}{r!} \\tag{1}$$\nAnd we want to prove that<br>\n$$\\left(\\frac{n+2}{n+1}\\right)^{n+2} \\le \\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1} \\frac{1}{r!} \\tag{2}$$\nAs usual, we assume this to be true. OP already proved $n=1$ is a base case. We now have that\n$$\\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\sum_{r=0}^{n+1}\\frac{1}{r!}+\\frac{1}{n+1}\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\frac{1}{(n+1)!}+\\sum_{r=0}^{n}\\frac{1}{r!}+\\frac{1}{(n+1)(n+1)!}+\\frac{1}{n+1}\\sum_{r=0}^{n}\\frac{1}{r!}$$</p>\n\n<p>$$\\ge \\frac{n+2}{(n+1)^2 n!}+\\left(1+\\frac{1}{n}\\right)^n\\left(\\frac{n+2}{n+1}\\right)$$\nUnfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. Given that I appealed only to the induction hypothesis so far this is midly surprising; perhaps the argument can be fixed from here?</p>\n" }, { "answer_id": 2083015, "score": 2, "text": "<p>Just $$\\sum_{r=0}^{n} \\frac{1}{r!}&lt;\\sum_{r=0}^{\\infty} \\frac{1}{r!}=e\\lt \\left(1+\\frac{1}{n}\\right)^{n+1}$$ \nbecause easy to show that $f(x)=\\left(1+\\frac{1}{x}\\right)^{x+1}$ is a decreasing function on $(0,+\\infty)$.</p>\n" }, { "answer_id": 2142125, "score": 1, "text": "<p><strong>Answer Attempt: Inequality too Weak</strong></p>\n\n<p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p>\n\n<hr>\n\n<p>We assume that\n$$\\left(1+\\frac{1}{n}\\right)^{n} \\le \\sum_{r=0}^{n} \\frac{1}{r!} \\tag{1}$$\nAnd we want to prove that<br>\n$$\\left(\\frac{n+2}{n+1}\\right)^{n+2} \\le \\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1} \\frac{1}{r!} \\tag{2}$$\nAs usual, we assume this to be true. OP already proved $n=1$ is a base case. We now have that\n$$\\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\sum_{r=0}^{n+1}\\frac{1}{r!}+\\frac{1}{n+1}\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\frac{1}{(n+1)!}+\\sum_{r=0}^{n}\\frac{1}{r!}+\\frac{1}{(n+1)(n+1)!}+\\frac{1}{n+1}\\sum_{r=0}^{n}\\frac{1}{r!}$$</p>\n\n<p>$$\\ge \\frac{n+2}{(n+1)^2 n!}+\\left(1+\\frac{1}{n}\\right)^n\\left(\\frac{n+2}{n+1}\\right)$$\nUnfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. 
Given that I appealed only to the induction hypothesis so far this is midly surprising; perhaps the argument can be fixed from here?</p>\n" }, { "answer_id": 2083015, "score": 2, "text": "<p>Just $$\\sum_{r=0}^{n} \\frac{1}{r!}&lt;\\sum_{r=0}^{\\infty} \\frac{1}{r!}=e\\lt \\left(1+\\frac{1}{n}\\right)^{n+1}$$ \nbecause easy to show that $f(x)=\\left(1+\\frac{1}{x}\\right)^{x+1}$ is a decreasing function on $(0,+\\infty)$.</p>\n" }, { "answer_id": 2142125, "score": 1, "text": "<p><strong>Answer Attempt: Inequality too Weak</strong></p>\n\n<p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p>\n\n<hr>\n\n<p>We assume that\n$$\\left(1+\\frac{1}{n}\\right)^{n} \\le \\sum_{r=0}^{n} \\frac{1}{r!} \\tag{1}$$\nAnd we want to prove that<br>\n$$\\left(\\frac{n+2}{n+1}\\right)^{n+2} \\le \\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1} \\frac{1}{r!} \\tag{2}$$\nAs usual, we assume this to be true. OP already proved $n=1$ is a base case. We now have that\n$$\\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\sum_{r=0}^{n+1}\\frac{1}{r!}+\\frac{1}{n+1}\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\frac{1}{(n+1)!}+\\sum_{r=0}^{n}\\frac{1}{r!}+\\frac{1}{(n+1)(n+1)!}+\\frac{1}{n+1}\\sum_{r=0}^{n}\\frac{1}{r!}$$</p>\n\n<p>$$\\ge \\frac{n+2}{(n+1)^2 n!}+\\left(1+\\frac{1}{n}\\right)^n\\left(\\frac{n+2}{n+1}\\right)$$\nUnfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. Given that I appealed only to the induction hypothesis so far this is midly surprising; perhaps the argument can be fixed from here?</p>\n" }, { "answer_id": 2083015, "score": 2, "text": "<p>Just $$\\sum_{r=0}^{n} \\frac{1}{r!}&lt;\\sum_{r=0}^{\\infty} \\frac{1}{r!}=e\\lt \\left(1+\\frac{1}{n}\\right)^{n+1}$$ \nbecause easy to show that $f(x)=\\left(1+\\frac{1}{x}\\right)^{x+1}$ is a decreasing function on $(0,+\\infty)$.</p>\n" }, { "answer_id": 2142125, "score": 1, "text": "<p><strong>Answer Attempt: Inequality too Weak</strong></p>\n\n<p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p>\n\n<hr>\n\n<p>We assume that\n$$\\left(1+\\frac{1}{n}\\right)^{n} \\le \\sum_{r=0}^{n} \\frac{1}{r!} \\tag{1}$$\nAnd we want to prove that<br>\n$$\\left(\\frac{n+2}{n+1}\\right)^{n+2} \\le \\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1} \\frac{1}{r!} \\tag{2}$$\nAs usual, we assume this to be true. OP already proved $n=1$ is a base case. We now have that\n$$\\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\sum_{r=0}^{n+1}\\frac{1}{r!}+\\frac{1}{n+1}\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\frac{1}{(n+1)!}+\\sum_{r=0}^{n}\\frac{1}{r!}+\\frac{1}{(n+1)(n+1)!}+\\frac{1}{n+1}\\sum_{r=0}^{n}\\frac{1}{r!}$$</p>\n\n<p>$$\\ge \\frac{n+2}{(n+1)^2 n!}+\\left(1+\\frac{1}{n}\\right)^n\\left(\\frac{n+2}{n+1}\\right)$$\nUnfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. 
Given that I appealed only to the induction hypothesis so far this is midly surprising; perhaps the argument can be fixed from here?</p>\n" }, { "answer_id": 2083015, "score": 2, "text": "<p>Just $$\\sum_{r=0}^{n} \\frac{1}{r!}&lt;\\sum_{r=0}^{\\infty} \\frac{1}{r!}=e\\lt \\left(1+\\frac{1}{n}\\right)^{n+1}$$ \nbecause easy to show that $f(x)=\\left(1+\\frac{1}{x}\\right)^{x+1}$ is a decreasing function on $(0,+\\infty)$.</p>\n" }, { "answer_id": 2142125, "score": 1, "text": "<p><strong>Answer Attempt: Inequality too Weak</strong></p>\n\n<p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p>\n\n<hr>\n\n<p>We assume that\n$$\\left(1+\\frac{1}{n}\\right)^{n} \\le \\sum_{r=0}^{n} \\frac{1}{r!} \\tag{1}$$\nAnd we want to prove that<br>\n$$\\left(\\frac{n+2}{n+1}\\right)^{n+2} \\le \\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1} \\frac{1}{r!} \\tag{2}$$\nAs usual, we assume this to be true. OP already proved $n=1$ is a base case. We now have that\n$$\\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\sum_{r=0}^{n+1}\\frac{1}{r!}+\\frac{1}{n+1}\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\frac{1}{(n+1)!}+\\sum_{r=0}^{n}\\frac{1}{r!}+\\frac{1}{(n+1)(n+1)!}+\\frac{1}{n+1}\\sum_{r=0}^{n}\\frac{1}{r!}$$</p>\n\n<p>$$\\ge \\frac{n+2}{(n+1)^2 n!}+\\left(1+\\frac{1}{n}\\right)^n\\left(\\frac{n+2}{n+1}\\right)$$\nUnfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. Given that I appealed only to the induction hypothesis so far this is midly surprising; perhaps the argument can be fixed from here?</p>\n" }, { "answer_id": 2083015, "score": 2, "text": "<p>Just $$\\sum_{r=0}^{n} \\frac{1}{r!}&lt;\\sum_{r=0}^{\\infty} \\frac{1}{r!}=e\\lt \\left(1+\\frac{1}{n}\\right)^{n+1}$$ \nbecause easy to show that $f(x)=\\left(1+\\frac{1}{x}\\right)^{x+1}$ is a decreasing function on $(0,+\\infty)$.</p>\n" }, { "answer_id": 2142125, "score": 1, "text": "<p><strong>Answer Attempt: Inequality too Weak</strong></p>\n\n<p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p>\n\n<hr>\n\n<p>We assume that\n$$\\left(1+\\frac{1}{n}\\right)^{n} \\le \\sum_{r=0}^{n} \\frac{1}{r!} \\tag{1}$$\nAnd we want to prove that<br>\n$$\\left(\\frac{n+2}{n+1}\\right)^{n+2} \\le \\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1} \\frac{1}{r!} \\tag{2}$$\nAs usual, we assume this to be true. OP already proved $n=1$ is a base case. We now have that\n$$\\left(1+\\frac{1}{n+1}\\right)\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\sum_{r=0}^{n+1}\\frac{1}{r!}+\\frac{1}{n+1}\\sum_{r=0}^{n+1}\\frac{1}{r!}$$\n$$= \\frac{1}{(n+1)!}+\\sum_{r=0}^{n}\\frac{1}{r!}+\\frac{1}{(n+1)(n+1)!}+\\frac{1}{n+1}\\sum_{r=0}^{n}\\frac{1}{r!}$$</p>\n\n<p>$$\\ge \\frac{n+2}{(n+1)^2 n!}+\\left(1+\\frac{1}{n}\\right)^n\\left(\\frac{n+2}{n+1}\\right)$$\nUnfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. Given that I appealed only to the induction hypothesis so far this is midly surprising; perhaps the argument can be fixed from here?</p>\n" } ]
Question: How to show $\sum\limits_{r=0}^n \frac{1}{r!} \lt\left (1 + \frac{1}{n}\right)^{n+1}$ for all $n \ge 1$? Answer 1: <p>Just $$\sum_{r=0}^{n} \frac{1}{r!}&lt;\sum_{r=0}^{\infty} \frac{1}{r!}=e\lt \left(1+\frac{1}{n}\right)^{n+1}$$ because it is easy to show that $f(x)=\left(1+\frac{1}{x}\right)^{x+1}$ is a decreasing function on $(0,+\infty)$.</p> Answer 2: <p><strong>Answer Attempt: Inequality too Weak</strong></p> <p><em>Too long for comment, but I wanted to see if anyone could fix any errors in this proof. Feel free to make it community wiki if possible.</em></p> <hr> <p>We assume that $$\left(1+\frac{1}{n}\right)^{n} \le \sum_{r=0}^{n} \frac{1}{r!} \tag{1}$$ And we want to prove that<br> $$\left(\frac{n+2}{n+1}\right)^{n+2} \le \left(1+\frac{1}{n+1}\right)\sum_{r=0}^{n+1} \frac{1}{r!} \tag{2}$$ Here $(1)$ is the induction hypothesis. OP already proved $n=1$ is a base case. We now have that $$\left(1+\frac{1}{n+1}\right)\sum_{r=0}^{n+1}\frac{1}{r!}$$ $$= \sum_{r=0}^{n+1}\frac{1}{r!}+\frac{1}{n+1}\sum_{r=0}^{n+1}\frac{1}{r!}$$ $$= \frac{1}{(n+1)!}+\sum_{r=0}^{n}\frac{1}{r!}+\frac{1}{(n+1)(n+1)!}+\frac{1}{n+1}\sum_{r=0}^{n}\frac{1}{r!}$$</p> <p>$$\ge \frac{n+2}{(n+1)^2 n!}+\left(1+\frac{1}{n}\right)^n\left(\frac{n+2}{n+1}\right)$$ Unfortunately, it looks like the inequality bounds are barely not tight enough, because this final term is smaller than we need it to be. Given that I appealed only to the induction hypothesis so far this is mildly surprising; perhaps the argument can be fixed from here?</p>
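<p>To see concretely what Answer 2 means by the bound being "barely not tight enough", one can compare the induction-derived lower bound with the target $\left(\frac{n+2}{n+1}\right)^{n+2}$. The sketch below is editorial (the name <code>induction_gap</code> is invented); in this check the derived bound actually suffices for $n \le 2$ and first falls short at $n = 3$:</p> <pre><code>from fractions import Fraction

def induction_gap(n):
    # lower bound obtained in the induction attempt vs. the target ((n+2)/(n+1))^(n+2)
    fact = 1
    for k in range(2, n + 1):
        fact *= k                                      # fact = n!
    bound = Fraction(n + 2, (n + 1) ** 2 * fact) \
          + (1 + Fraction(1, n)) ** n * Fraction(n + 2, n + 1)
    target = Fraction(n + 2, n + 1) ** (n + 2)
    return bound, target

for n in range(1, 8):
    bound, target = induction_gap(n)
    print(n, float(bound), float(target),
          "step works" if bound >= target else "falls short")
</code></pre>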
When is the fraction $\frac{n+7}{2n+7}$ the square of a rational number?
2,083,696
6
https://math.stackexchange.com/questions/2083696/when-is-the-fraction-fracn72n7-the-square-of-a-rational-number
<blockquote> <p>Find all positive integers $n$ such that the fraction $\dfrac{n+7}{2n+7}$ is the square of a rational number.</p> </blockquote> <p>The answer says $n = 9,57,477$, but how do we prove this? I wrote $\dfrac{n+7}{2n+7} = \dfrac{a^2}{b^2}$ for some $a,b \in \mathbb{Z^+}$ and $\gcd(a,b) = 1$. We know that $\gcd(n+7,2n+7) = \gcd(n+7,7) \in \{1,7\}$, and so either $n+7$ and $2n+7$ are both perfect squares, or their greatest common divisor is $7$. How do we find all values for which this is the case?</p>
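<p>Before the proofs below, a brute-force search makes the structure visible (an editorial sketch, not part of the question; <code>is_square</code> is a helper introduced here). Since $\gcd(n+7,2n+7)\in\{1,7\}$, the reduced fraction is a square exactly when, after cancelling a possible common factor of $7$, both numerator and denominator are perfect squares:</p> <pre><code>from math import isqrt

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

# (n+7)/(2n+7) is the square of a rational iff, after cancelling the possible
# common factor 7, numerator and denominator are both perfect squares.
hits = []
for n in range(1, 10 ** 6):
    a, b = n + 7, 2 * n + 7
    if a % 7 == 0 and b % 7 == 0:   # 7 divides both exactly when 7 | n
        a, b = a // 7, b // 7
    if is_square(a) and is_square(b):
        hits.append(n)

print(hits)  # expected to begin 9, 57, 168, 477, 2109, 5880, ...
</code></pre> <p>The search suggests the listed answer $9, 57, 477$ can only be complete under some unstated bound on $n$, a point the answers below make precise via Pell equations.</p>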
[ { "answer_id": 2083840, "score": 4, "text": "<p>$$\\gcd(n+7,2n+7)=\\gcd(n+7,n)=\\gcd(7,n)\\in\\{1,7\\}$$</p>\n\n<p>If the $\\gcd=1$, then $n+7,2n+7$ are both squares:</p>\n\n<p>$$n+7=a^2,2n+7=b^2\\implies 2a^2-b^2=7$$</p>\n\n<p>If the $\\gcd=7$, then $\\frac{n+7}{7},\\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p>\n\n<p>$$m+1=a^2,2m+1=b^2\\implies2a^2-b^2=1$$</p>\n\n<p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p>\n\n<ul>\n<li><p>The first equation has solutions starting with\n$\\{(2,1),(8,11),\\cdots\\}$</p></li>\n<li><p>The second equation has solutions starting with\n$\\{(1,1),(5,7),(41,29),\\cdots\\}$.</p></li>\n</ul>\n\n<p>These can be generated by looking at $(3+\\sqrt2)(1+\\sqrt2)^{2k},(1+\\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\\frac{1}{1+\\sqrt2}=-(1-\\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\\ge0$.</p>\n\n<p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p>\n\n<p><strong>Clarification:</strong> When I say \"these can be generated by looking at $(a+b\\sqrt2)^k$\", I refer to the fact that by writing $(a+b\\sqrt2)^k=A_k+B_k\\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\\sqrt2\\to a^2-2b^2$ is multiplicative.</p>\n" }, { "answer_id": 2085106, "score": 3, "text": "<p>Because $2(n+7)-(2n+7)=7$, we know that $\\gcd(n+7,2n+7)\\mid7$.</p>\n\n<p>Since we only want positive $n$, this implies one of</p>\n\n<ol>\n<li><p>$n+7$ and $2n+7$ are squares, or</p></li>\n<li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p>\n\n<hr></li>\n</ol>\n\n<p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation\n$$\n2p^2-q^2=7\\tag{1}\n$$\nEquation $(1)$ has two sequences of solutions:\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (2,1)\\ \\ \\ }^{n=-3},\\overbrace{\\ \\ (8,11)\\ \\ }^{n=57},\\overbrace{\\ (46,65)\\ }^{n=2109},\\dots\\}\\tag{2}\n$$\nand\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (4,5)\\ \\ \\ }^{n=9},\\overbrace{\\ (22,31)\\ }^{n=477},\\overbrace{(128,181)}^{n=16377},\\dots\\}\\tag{3}\n$$\nBoth of which satisfy the recursion\n$$\n(p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\\tag{4}\n$$\nWe can solve the recursion $(4)$ for $(2)$ with\n$$\np_k=\\tfrac14\\left(4+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(4-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{5}\n$$\nand\n$$\nq_k=\\tfrac12\\left(1+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac12\\left(1-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{6}\n$$\nAdditionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p>\n\n<p>Therefore, we get the solutions for $n$ as\n$$\n\\begin{align}\nn_k\n&amp;=p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac18\\left[\\left(9+4\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^{2k}+\\left(9-4\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^{2k}-42\\right]}\\tag{7}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}$. For example, if $k\\in\\{-5,-4,-3,-2,-1,0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\\}\n$$</p>\n\n<hr>\n\n<p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. 
As above, this leads to the Pell's Equation\n$$\n2p^2-q^2=1\\tag{8}\n$$\nEquation $(8)$ has the solutions\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (1,1)\\ \\ \\ }^{n=0},\\overbrace{\\ \\ \\ (5,7)\\ \\ \\ }^{n=168},\\overbrace{\\ (29,41)\\ }^{n=5880},\\dots\\}\\tag{9}\n$$\nwhich satisfies $(4)$.</p>\n\n<p>We can solve the recursion $(4)$ for $(9)$ with\n$$\np_k=\\tfrac14\\left(2+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{10}\n$$\nand\n$$\nq_k=\\tfrac14\\left(2+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{11}\n$$\nFormulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\\ge0$.\n$$\n\\begin{align}\nn_k\n&amp;=7p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac78\\left[\\left(3+2\\sqrt2\\right)^{2k+1}+\\left(3-2\\sqrt2\\right)^{2k+1}-6\\right]}\\tag{12}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}^{\\ge0}$. For exmple, if $k\\in\\{0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{0,168,5880,199920,6791568,230713560\\}\n$$</p>\n\n<hr>\n\n<p><strong>Complete Closed and Recursive Solutions</strong></p>\n\n<p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\\ge1$, we get that the closed form for the complete list of solutions is\n$$\n\\hspace{-8mm}\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}\n=\\frac18\\left(\\,\\begin{bmatrix}9-4\\sqrt2\\\\9+4\\sqrt2\\\\21+14\\sqrt2\\end{bmatrix}\\left(3+2\\sqrt2\\right)^{2k}+\\begin{bmatrix}9+4\\sqrt2\\\\9-4\\sqrt2\\\\21-14\\sqrt2\\end{bmatrix}\\left(3-2\\sqrt2\\right)^{2k}-\\begin{bmatrix}42\\\\42\\\\42\\end{bmatrix}\\,\\right)}\\tag{13}\n$$\nand a recursive list of solutions is given by<br>\n$\\scriptsize\\begin{bmatrix}n_1\\\\n_2\\\\n_3\\end{bmatrix}=\\begin{bmatrix}9\\\\57\\\\168\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_4\\\\n_5\\\\n_6\\end{bmatrix}=\\begin{bmatrix}477\\\\2109\\\\5880\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_7\\\\n_8\\\\n_9\\end{bmatrix}=\\begin{bmatrix}16377\\\\71817\\\\199920\\end{bmatrix},\\quad$ and\n$$\n\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k+1}\\\\n_{3k+2}\\\\n_{3k+3}\\end{bmatrix}=35\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}-35\\begin{bmatrix}n_{3k-5}\\\\n_{3k-4}\\\\n_{3k-3}\\end{bmatrix}+\\begin{bmatrix}n_{3k-8}\\\\n_{3k-7}\\\\n_{3k-6}\\end{bmatrix}\n}\\tag{14}\n$$</p>\n" }, { "answer_id": 2083707, "score": 2, "text": "<p>There are more solutions (n&lt;1000000):\n$$\n\\begin{array}{c}\n 9 \\\\\n 57 \\\\\n 168 \\\\\n 477 \\\\\n 2109 \\\\\n 5880 \\\\\n 16377 \\\\\n 71817 \\\\\n 199920 \\\\\n 556509 \\\\\n\\end{array}\n$$\nso you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p>\n\n<p>Added after observation of ross-millikan:</p>\n\n<p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. 
Thus we have two possibilities</p>\n\n<p>a) $n+7=a^2$ and $2n+7=b^2$</p>\n\n<p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p>\n\n<p>for b) n should be divisible by $7$</p>\n" }, { "answer_id": 2083840, "score": 4, "text": "<p>$$\\gcd(n+7,2n+7)=\\gcd(n+7,n)=\\gcd(7,n)\\in\\{1,7\\}$$</p>\n\n<p>If the $\\gcd=1$, then $n+7,2n+7$ are both squares:</p>\n\n<p>$$n+7=a^2,2n+7=b^2\\implies 2a^2-b^2=7$$</p>\n\n<p>If the $\\gcd=7$, then $\\frac{n+7}{7},\\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p>\n\n<p>$$m+1=a^2,2m+1=b^2\\implies2a^2-b^2=1$$</p>\n\n<p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p>\n\n<ul>\n<li><p>The first equation has solutions starting with\n$\\{(2,1),(8,11),\\cdots\\}$</p></li>\n<li><p>The second equation has solutions starting with\n$\\{(1,1),(5,7),(41,29),\\cdots\\}$.</p></li>\n</ul>\n\n<p>These can be generated by looking at $(3+\\sqrt2)(1+\\sqrt2)^{2k},(1+\\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\\frac{1}{1+\\sqrt2}=-(1-\\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\\ge0$.</p>\n\n<p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p>\n\n<p><strong>Clarification:</strong> When I say \"these can be generated by looking at $(a+b\\sqrt2)^k$\", I refer to the fact that by writing $(a+b\\sqrt2)^k=A_k+B_k\\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\\sqrt2\\to a^2-2b^2$ is multiplicative.</p>\n" }, { "answer_id": 2085106, "score": 3, "text": "<p>Because $2(n+7)-(2n+7)=7$, we know that $\\gcd(n+7,2n+7)\\mid7$.</p>\n\n<p>Since we only want positive $n$, this implies one of</p>\n\n<ol>\n<li><p>$n+7$ and $2n+7$ are squares, or</p></li>\n<li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p>\n\n<hr></li>\n</ol>\n\n<p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation\n$$\n2p^2-q^2=7\\tag{1}\n$$\nEquation $(1)$ has two sequences of solutions:\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (2,1)\\ \\ \\ }^{n=-3},\\overbrace{\\ \\ (8,11)\\ \\ }^{n=57},\\overbrace{\\ (46,65)\\ }^{n=2109},\\dots\\}\\tag{2}\n$$\nand\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (4,5)\\ \\ \\ }^{n=9},\\overbrace{\\ (22,31)\\ }^{n=477},\\overbrace{(128,181)}^{n=16377},\\dots\\}\\tag{3}\n$$\nBoth of which satisfy the recursion\n$$\n(p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\\tag{4}\n$$\nWe can solve the recursion $(4)$ for $(2)$ with\n$$\np_k=\\tfrac14\\left(4+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(4-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{5}\n$$\nand\n$$\nq_k=\\tfrac12\\left(1+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac12\\left(1-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{6}\n$$\nAdditionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p>\n\n<p>Therefore, we get the solutions for $n$ as\n$$\n\\begin{align}\nn_k\n&amp;=p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac18\\left[\\left(9+4\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^{2k}+\\left(9-4\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^{2k}-42\\right]}\\tag{7}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}$. 
For example, if $k\\in\\{-5,-4,-3,-2,-1,0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\\}\n$$</p>\n\n<hr>\n\n<p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. As above, this leads to the Pell's Equation\n$$\n2p^2-q^2=1\\tag{8}\n$$\nEquation $(8)$ has the solutions\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (1,1)\\ \\ \\ }^{n=0},\\overbrace{\\ \\ \\ (5,7)\\ \\ \\ }^{n=168},\\overbrace{\\ (29,41)\\ }^{n=5880},\\dots\\}\\tag{9}\n$$\nwhich satisfies $(4)$.</p>\n\n<p>We can solve the recursion $(4)$ for $(9)$ with\n$$\np_k=\\tfrac14\\left(2+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{10}\n$$\nand\n$$\nq_k=\\tfrac14\\left(2+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{11}\n$$\nFormulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\\ge0$.\n$$\n\\begin{align}\nn_k\n&amp;=7p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac78\\left[\\left(3+2\\sqrt2\\right)^{2k+1}+\\left(3-2\\sqrt2\\right)^{2k+1}-6\\right]}\\tag{12}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}^{\\ge0}$. For exmple, if $k\\in\\{0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{0,168,5880,199920,6791568,230713560\\}\n$$</p>\n\n<hr>\n\n<p><strong>Complete Closed and Recursive Solutions</strong></p>\n\n<p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\\ge1$, we get that the closed form for the complete list of solutions is\n$$\n\\hspace{-8mm}\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}\n=\\frac18\\left(\\,\\begin{bmatrix}9-4\\sqrt2\\\\9+4\\sqrt2\\\\21+14\\sqrt2\\end{bmatrix}\\left(3+2\\sqrt2\\right)^{2k}+\\begin{bmatrix}9+4\\sqrt2\\\\9-4\\sqrt2\\\\21-14\\sqrt2\\end{bmatrix}\\left(3-2\\sqrt2\\right)^{2k}-\\begin{bmatrix}42\\\\42\\\\42\\end{bmatrix}\\,\\right)}\\tag{13}\n$$\nand a recursive list of solutions is given by<br>\n$\\scriptsize\\begin{bmatrix}n_1\\\\n_2\\\\n_3\\end{bmatrix}=\\begin{bmatrix}9\\\\57\\\\168\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_4\\\\n_5\\\\n_6\\end{bmatrix}=\\begin{bmatrix}477\\\\2109\\\\5880\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_7\\\\n_8\\\\n_9\\end{bmatrix}=\\begin{bmatrix}16377\\\\71817\\\\199920\\end{bmatrix},\\quad$ and\n$$\n\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k+1}\\\\n_{3k+2}\\\\n_{3k+3}\\end{bmatrix}=35\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}-35\\begin{bmatrix}n_{3k-5}\\\\n_{3k-4}\\\\n_{3k-3}\\end{bmatrix}+\\begin{bmatrix}n_{3k-8}\\\\n_{3k-7}\\\\n_{3k-6}\\end{bmatrix}\n}\\tag{14}\n$$</p>\n" }, { "answer_id": 2083707, "score": 2, "text": "<p>There are more solutions (n&lt;1000000):\n$$\n\\begin{array}{c}\n 9 \\\\\n 57 \\\\\n 168 \\\\\n 477 \\\\\n 2109 \\\\\n 5880 \\\\\n 16377 \\\\\n 71817 \\\\\n 199920 \\\\\n 556509 \\\\\n\\end{array}\n$$\nso you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p>\n\n<p>Added after observation of ross-millikan:</p>\n\n<p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. 
Thus we have two possibilities</p>\n\n<p>a) $n+7=a^2$ and $2n+7=b^2$</p>\n\n<p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p>\n\n<p>for b) n should be divisible by $7$</p>\n" }, { "answer_id": 2083840, "score": 4, "text": "<p>$$\\gcd(n+7,2n+7)=\\gcd(n+7,n)=\\gcd(7,n)\\in\\{1,7\\}$$</p>\n\n<p>If the $\\gcd=1$, then $n+7,2n+7$ are both squares:</p>\n\n<p>$$n+7=a^2,2n+7=b^2\\implies 2a^2-b^2=7$$</p>\n\n<p>If the $\\gcd=7$, then $\\frac{n+7}{7},\\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p>\n\n<p>$$m+1=a^2,2m+1=b^2\\implies2a^2-b^2=1$$</p>\n\n<p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p>\n\n<ul>\n<li><p>The first equation has solutions starting with\n$\\{(2,1),(8,11),\\cdots\\}$</p></li>\n<li><p>The second equation has solutions starting with\n$\\{(1,1),(5,7),(41,29),\\cdots\\}$.</p></li>\n</ul>\n\n<p>These can be generated by looking at $(3+\\sqrt2)(1+\\sqrt2)^{2k},(1+\\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\\frac{1}{1+\\sqrt2}=-(1-\\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\\ge0$.</p>\n\n<p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p>\n\n<p><strong>Clarification:</strong> When I say \"these can be generated by looking at $(a+b\\sqrt2)^k$\", I refer to the fact that by writing $(a+b\\sqrt2)^k=A_k+B_k\\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\\sqrt2\\to a^2-2b^2$ is multiplicative.</p>\n" }, { "answer_id": 2085106, "score": 3, "text": "<p>Because $2(n+7)-(2n+7)=7$, we know that $\\gcd(n+7,2n+7)\\mid7$.</p>\n\n<p>Since we only want positive $n$, this implies one of</p>\n\n<ol>\n<li><p>$n+7$ and $2n+7$ are squares, or</p></li>\n<li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p>\n\n<hr></li>\n</ol>\n\n<p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation\n$$\n2p^2-q^2=7\\tag{1}\n$$\nEquation $(1)$ has two sequences of solutions:\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (2,1)\\ \\ \\ }^{n=-3},\\overbrace{\\ \\ (8,11)\\ \\ }^{n=57},\\overbrace{\\ (46,65)\\ }^{n=2109},\\dots\\}\\tag{2}\n$$\nand\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (4,5)\\ \\ \\ }^{n=9},\\overbrace{\\ (22,31)\\ }^{n=477},\\overbrace{(128,181)}^{n=16377},\\dots\\}\\tag{3}\n$$\nBoth of which satisfy the recursion\n$$\n(p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\\tag{4}\n$$\nWe can solve the recursion $(4)$ for $(2)$ with\n$$\np_k=\\tfrac14\\left(4+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(4-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{5}\n$$\nand\n$$\nq_k=\\tfrac12\\left(1+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac12\\left(1-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{6}\n$$\nAdditionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p>\n\n<p>Therefore, we get the solutions for $n$ as\n$$\n\\begin{align}\nn_k\n&amp;=p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac18\\left[\\left(9+4\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^{2k}+\\left(9-4\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^{2k}-42\\right]}\\tag{7}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}$. 
For example, if $k\\in\\{-5,-4,-3,-2,-1,0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\\}\n$$</p>\n\n<hr>\n\n<p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. As above, this leads to the Pell's Equation\n$$\n2p^2-q^2=1\\tag{8}\n$$\nEquation $(8)$ has the solutions\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (1,1)\\ \\ \\ }^{n=0},\\overbrace{\\ \\ \\ (5,7)\\ \\ \\ }^{n=168},\\overbrace{\\ (29,41)\\ }^{n=5880},\\dots\\}\\tag{9}\n$$\nwhich satisfies $(4)$.</p>\n\n<p>We can solve the recursion $(4)$ for $(9)$ with\n$$\np_k=\\tfrac14\\left(2+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{10}\n$$\nand\n$$\nq_k=\\tfrac14\\left(2+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{11}\n$$\nFormulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\\ge0$.\n$$\n\\begin{align}\nn_k\n&amp;=7p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac78\\left[\\left(3+2\\sqrt2\\right)^{2k+1}+\\left(3-2\\sqrt2\\right)^{2k+1}-6\\right]}\\tag{12}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}^{\\ge0}$. For exmple, if $k\\in\\{0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{0,168,5880,199920,6791568,230713560\\}\n$$</p>\n\n<hr>\n\n<p><strong>Complete Closed and Recursive Solutions</strong></p>\n\n<p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\\ge1$, we get that the closed form for the complete list of solutions is\n$$\n\\hspace{-8mm}\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}\n=\\frac18\\left(\\,\\begin{bmatrix}9-4\\sqrt2\\\\9+4\\sqrt2\\\\21+14\\sqrt2\\end{bmatrix}\\left(3+2\\sqrt2\\right)^{2k}+\\begin{bmatrix}9+4\\sqrt2\\\\9-4\\sqrt2\\\\21-14\\sqrt2\\end{bmatrix}\\left(3-2\\sqrt2\\right)^{2k}-\\begin{bmatrix}42\\\\42\\\\42\\end{bmatrix}\\,\\right)}\\tag{13}\n$$\nand a recursive list of solutions is given by<br>\n$\\scriptsize\\begin{bmatrix}n_1\\\\n_2\\\\n_3\\end{bmatrix}=\\begin{bmatrix}9\\\\57\\\\168\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_4\\\\n_5\\\\n_6\\end{bmatrix}=\\begin{bmatrix}477\\\\2109\\\\5880\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_7\\\\n_8\\\\n_9\\end{bmatrix}=\\begin{bmatrix}16377\\\\71817\\\\199920\\end{bmatrix},\\quad$ and\n$$\n\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k+1}\\\\n_{3k+2}\\\\n_{3k+3}\\end{bmatrix}=35\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}-35\\begin{bmatrix}n_{3k-5}\\\\n_{3k-4}\\\\n_{3k-3}\\end{bmatrix}+\\begin{bmatrix}n_{3k-8}\\\\n_{3k-7}\\\\n_{3k-6}\\end{bmatrix}\n}\\tag{14}\n$$</p>\n" }, { "answer_id": 2083707, "score": 2, "text": "<p>There are more solutions (n&lt;1000000):\n$$\n\\begin{array}{c}\n 9 \\\\\n 57 \\\\\n 168 \\\\\n 477 \\\\\n 2109 \\\\\n 5880 \\\\\n 16377 \\\\\n 71817 \\\\\n 199920 \\\\\n 556509 \\\\\n\\end{array}\n$$\nso you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p>\n\n<p>Added after observation of ross-millikan:</p>\n\n<p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. 
Thus we have two possibilities</p>\n\n<p>a) $n+7=a^2$ and $2n+7=b^2$</p>\n\n<p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p>\n\n<p>for b) n should be divisible by $7$</p>\n" }, { "answer_id": 2083840, "score": 4, "text": "<p>$$\\gcd(n+7,2n+7)=\\gcd(n+7,n)=\\gcd(7,n)\\in\\{1,7\\}$$</p>\n\n<p>If the $\\gcd=1$, then $n+7,2n+7$ are both squares:</p>\n\n<p>$$n+7=a^2,2n+7=b^2\\implies 2a^2-b^2=7$$</p>\n\n<p>If the $\\gcd=7$, then $\\frac{n+7}{7},\\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p>\n\n<p>$$m+1=a^2,2m+1=b^2\\implies2a^2-b^2=1$$</p>\n\n<p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p>\n\n<ul>\n<li><p>The first equation has solutions starting with\n$\\{(2,1),(8,11),\\cdots\\}$</p></li>\n<li><p>The second equation has solutions starting with\n$\\{(1,1),(5,7),(41,29),\\cdots\\}$.</p></li>\n</ul>\n\n<p>These can be generated by looking at $(3+\\sqrt2)(1+\\sqrt2)^{2k},(1+\\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\\frac{1}{1+\\sqrt2}=-(1-\\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\\ge0$.</p>\n\n<p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p>\n\n<p><strong>Clarification:</strong> When I say \"these can be generated by looking at $(a+b\\sqrt2)^k$\", I refer to the fact that by writing $(a+b\\sqrt2)^k=A_k+B_k\\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\\sqrt2\\to a^2-2b^2$ is multiplicative.</p>\n" }, { "answer_id": 2085106, "score": 3, "text": "<p>Because $2(n+7)-(2n+7)=7$, we know that $\\gcd(n+7,2n+7)\\mid7$.</p>\n\n<p>Since we only want positive $n$, this implies one of</p>\n\n<ol>\n<li><p>$n+7$ and $2n+7$ are squares, or</p></li>\n<li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p>\n\n<hr></li>\n</ol>\n\n<p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation\n$$\n2p^2-q^2=7\\tag{1}\n$$\nEquation $(1)$ has two sequences of solutions:\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (2,1)\\ \\ \\ }^{n=-3},\\overbrace{\\ \\ (8,11)\\ \\ }^{n=57},\\overbrace{\\ (46,65)\\ }^{n=2109},\\dots\\}\\tag{2}\n$$\nand\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (4,5)\\ \\ \\ }^{n=9},\\overbrace{\\ (22,31)\\ }^{n=477},\\overbrace{(128,181)}^{n=16377},\\dots\\}\\tag{3}\n$$\nBoth of which satisfy the recursion\n$$\n(p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\\tag{4}\n$$\nWe can solve the recursion $(4)$ for $(2)$ with\n$$\np_k=\\tfrac14\\left(4+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(4-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{5}\n$$\nand\n$$\nq_k=\\tfrac12\\left(1+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac12\\left(1-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{6}\n$$\nAdditionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p>\n\n<p>Therefore, we get the solutions for $n$ as\n$$\n\\begin{align}\nn_k\n&amp;=p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac18\\left[\\left(9+4\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^{2k}+\\left(9-4\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^{2k}-42\\right]}\\tag{7}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}$. 
For example, if $k\\in\\{-5,-4,-3,-2,-1,0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\\}\n$$</p>\n\n<hr>\n\n<p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. As above, this leads to the Pell's Equation\n$$\n2p^2-q^2=1\\tag{8}\n$$\nEquation $(8)$ has the solutions\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (1,1)\\ \\ \\ }^{n=0},\\overbrace{\\ \\ \\ (5,7)\\ \\ \\ }^{n=168},\\overbrace{\\ (29,41)\\ }^{n=5880},\\dots\\}\\tag{9}\n$$\nwhich satisfies $(4)$.</p>\n\n<p>We can solve the recursion $(4)$ for $(9)$ with\n$$\np_k=\\tfrac14\\left(2+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{10}\n$$\nand\n$$\nq_k=\\tfrac14\\left(2+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{11}\n$$\nFormulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\\ge0$.\n$$\n\\begin{align}\nn_k\n&amp;=7p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac78\\left[\\left(3+2\\sqrt2\\right)^{2k+1}+\\left(3-2\\sqrt2\\right)^{2k+1}-6\\right]}\\tag{12}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}^{\\ge0}$. For exmple, if $k\\in\\{0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{0,168,5880,199920,6791568,230713560\\}\n$$</p>\n\n<hr>\n\n<p><strong>Complete Closed and Recursive Solutions</strong></p>\n\n<p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\\ge1$, we get that the closed form for the complete list of solutions is\n$$\n\\hspace{-8mm}\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}\n=\\frac18\\left(\\,\\begin{bmatrix}9-4\\sqrt2\\\\9+4\\sqrt2\\\\21+14\\sqrt2\\end{bmatrix}\\left(3+2\\sqrt2\\right)^{2k}+\\begin{bmatrix}9+4\\sqrt2\\\\9-4\\sqrt2\\\\21-14\\sqrt2\\end{bmatrix}\\left(3-2\\sqrt2\\right)^{2k}-\\begin{bmatrix}42\\\\42\\\\42\\end{bmatrix}\\,\\right)}\\tag{13}\n$$\nand a recursive list of solutions is given by<br>\n$\\scriptsize\\begin{bmatrix}n_1\\\\n_2\\\\n_3\\end{bmatrix}=\\begin{bmatrix}9\\\\57\\\\168\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_4\\\\n_5\\\\n_6\\end{bmatrix}=\\begin{bmatrix}477\\\\2109\\\\5880\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_7\\\\n_8\\\\n_9\\end{bmatrix}=\\begin{bmatrix}16377\\\\71817\\\\199920\\end{bmatrix},\\quad$ and\n$$\n\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k+1}\\\\n_{3k+2}\\\\n_{3k+3}\\end{bmatrix}=35\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}-35\\begin{bmatrix}n_{3k-5}\\\\n_{3k-4}\\\\n_{3k-3}\\end{bmatrix}+\\begin{bmatrix}n_{3k-8}\\\\n_{3k-7}\\\\n_{3k-6}\\end{bmatrix}\n}\\tag{14}\n$$</p>\n" }, { "answer_id": 2083707, "score": 2, "text": "<p>There are more solutions (n&lt;1000000):\n$$\n\\begin{array}{c}\n 9 \\\\\n 57 \\\\\n 168 \\\\\n 477 \\\\\n 2109 \\\\\n 5880 \\\\\n 16377 \\\\\n 71817 \\\\\n 199920 \\\\\n 556509 \\\\\n\\end{array}\n$$\nso you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p>\n\n<p>Added after observation of ross-millikan:</p>\n\n<p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. 
Thus we have two possibilities</p>\n\n<p>a) $n+7=a^2$ and $2n+7=b^2$</p>\n\n<p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p>\n\n<p>for b) n should be divisible by $7$</p>\n" }, { "answer_id": 2083840, "score": 4, "text": "<p>$$\\gcd(n+7,2n+7)=\\gcd(n+7,n)=\\gcd(7,n)\\in\\{1,7\\}$$</p>\n\n<p>If the $\\gcd=1$, then $n+7,2n+7$ are both squares:</p>\n\n<p>$$n+7=a^2,2n+7=b^2\\implies 2a^2-b^2=7$$</p>\n\n<p>If the $\\gcd=7$, then $\\frac{n+7}{7},\\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p>\n\n<p>$$m+1=a^2,2m+1=b^2\\implies2a^2-b^2=1$$</p>\n\n<p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p>\n\n<ul>\n<li><p>The first equation has solutions starting with\n$\\{(2,1),(8,11),\\cdots\\}$</p></li>\n<li><p>The second equation has solutions starting with\n$\\{(1,1),(5,7),(41,29),\\cdots\\}$.</p></li>\n</ul>\n\n<p>These can be generated by looking at $(3+\\sqrt2)(1+\\sqrt2)^{2k},(1+\\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\\frac{1}{1+\\sqrt2}=-(1-\\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\\ge0$.</p>\n\n<p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p>\n\n<p><strong>Clarification:</strong> When I say \"these can be generated by looking at $(a+b\\sqrt2)^k$\", I refer to the fact that by writing $(a+b\\sqrt2)^k=A_k+B_k\\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\\sqrt2\\to a^2-2b^2$ is multiplicative.</p>\n" }, { "answer_id": 2085106, "score": 3, "text": "<p>Because $2(n+7)-(2n+7)=7$, we know that $\\gcd(n+7,2n+7)\\mid7$.</p>\n\n<p>Since we only want positive $n$, this implies one of</p>\n\n<ol>\n<li><p>$n+7$ and $2n+7$ are squares, or</p></li>\n<li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p>\n\n<hr></li>\n</ol>\n\n<p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation\n$$\n2p^2-q^2=7\\tag{1}\n$$\nEquation $(1)$ has two sequences of solutions:\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (2,1)\\ \\ \\ }^{n=-3},\\overbrace{\\ \\ (8,11)\\ \\ }^{n=57},\\overbrace{\\ (46,65)\\ }^{n=2109},\\dots\\}\\tag{2}\n$$\nand\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (4,5)\\ \\ \\ }^{n=9},\\overbrace{\\ (22,31)\\ }^{n=477},\\overbrace{(128,181)}^{n=16377},\\dots\\}\\tag{3}\n$$\nBoth of which satisfy the recursion\n$$\n(p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\\tag{4}\n$$\nWe can solve the recursion $(4)$ for $(2)$ with\n$$\np_k=\\tfrac14\\left(4+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(4-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{5}\n$$\nand\n$$\nq_k=\\tfrac12\\left(1+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac12\\left(1-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{6}\n$$\nAdditionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p>\n\n<p>Therefore, we get the solutions for $n$ as\n$$\n\\begin{align}\nn_k\n&amp;=p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac18\\left[\\left(9+4\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^{2k}+\\left(9-4\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^{2k}-42\\right]}\\tag{7}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}$. 
For example, if $k\\in\\{-5,-4,-3,-2,-1,0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\\}\n$$</p>\n\n<hr>\n\n<p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. As above, this leads to the Pell's Equation\n$$\n2p^2-q^2=1\\tag{8}\n$$\nEquation $(8)$ has the solutions\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (1,1)\\ \\ \\ }^{n=0},\\overbrace{\\ \\ \\ (5,7)\\ \\ \\ }^{n=168},\\overbrace{\\ (29,41)\\ }^{n=5880},\\dots\\}\\tag{9}\n$$\nwhich satisfies $(4)$.</p>\n\n<p>We can solve the recursion $(4)$ for $(9)$ with\n$$\np_k=\\tfrac14\\left(2+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{10}\n$$\nand\n$$\nq_k=\\tfrac14\\left(2+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{11}\n$$\nFormulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\\ge0$.\n$$\n\\begin{align}\nn_k\n&amp;=7p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac78\\left[\\left(3+2\\sqrt2\\right)^{2k+1}+\\left(3-2\\sqrt2\\right)^{2k+1}-6\\right]}\\tag{12}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}^{\\ge0}$. For exmple, if $k\\in\\{0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{0,168,5880,199920,6791568,230713560\\}\n$$</p>\n\n<hr>\n\n<p><strong>Complete Closed and Recursive Solutions</strong></p>\n\n<p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\\ge1$, we get that the closed form for the complete list of solutions is\n$$\n\\hspace{-8mm}\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}\n=\\frac18\\left(\\,\\begin{bmatrix}9-4\\sqrt2\\\\9+4\\sqrt2\\\\21+14\\sqrt2\\end{bmatrix}\\left(3+2\\sqrt2\\right)^{2k}+\\begin{bmatrix}9+4\\sqrt2\\\\9-4\\sqrt2\\\\21-14\\sqrt2\\end{bmatrix}\\left(3-2\\sqrt2\\right)^{2k}-\\begin{bmatrix}42\\\\42\\\\42\\end{bmatrix}\\,\\right)}\\tag{13}\n$$\nand a recursive list of solutions is given by<br>\n$\\scriptsize\\begin{bmatrix}n_1\\\\n_2\\\\n_3\\end{bmatrix}=\\begin{bmatrix}9\\\\57\\\\168\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_4\\\\n_5\\\\n_6\\end{bmatrix}=\\begin{bmatrix}477\\\\2109\\\\5880\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_7\\\\n_8\\\\n_9\\end{bmatrix}=\\begin{bmatrix}16377\\\\71817\\\\199920\\end{bmatrix},\\quad$ and\n$$\n\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k+1}\\\\n_{3k+2}\\\\n_{3k+3}\\end{bmatrix}=35\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}-35\\begin{bmatrix}n_{3k-5}\\\\n_{3k-4}\\\\n_{3k-3}\\end{bmatrix}+\\begin{bmatrix}n_{3k-8}\\\\n_{3k-7}\\\\n_{3k-6}\\end{bmatrix}\n}\\tag{14}\n$$</p>\n" }, { "answer_id": 2083707, "score": 2, "text": "<p>There are more solutions (n&lt;1000000):\n$$\n\\begin{array}{c}\n 9 \\\\\n 57 \\\\\n 168 \\\\\n 477 \\\\\n 2109 \\\\\n 5880 \\\\\n 16377 \\\\\n 71817 \\\\\n 199920 \\\\\n 556509 \\\\\n\\end{array}\n$$\nso you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p>\n\n<p>Added after observation of ross-millikan:</p>\n\n<p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. 
Thus we have two possibilities</p>\n\n<p>a) $n+7=a^2$ and $2n+7=b^2$</p>\n\n<p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p>\n\n<p>for b) n should be divisible by $7$</p>\n" }, { "answer_id": 2083840, "score": 4, "text": "<p>$$\\gcd(n+7,2n+7)=\\gcd(n+7,n)=\\gcd(7,n)\\in\\{1,7\\}$$</p>\n\n<p>If the $\\gcd=1$, then $n+7,2n+7$ are both squares:</p>\n\n<p>$$n+7=a^2,2n+7=b^2\\implies 2a^2-b^2=7$$</p>\n\n<p>If the $\\gcd=7$, then $\\frac{n+7}{7},\\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p>\n\n<p>$$m+1=a^2,2m+1=b^2\\implies2a^2-b^2=1$$</p>\n\n<p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p>\n\n<ul>\n<li><p>The first equation has solutions starting with\n$\\{(2,1),(8,11),\\cdots\\}$</p></li>\n<li><p>The second equation has solutions starting with\n$\\{(1,1),(5,7),(41,29),\\cdots\\}$.</p></li>\n</ul>\n\n<p>These can be generated by looking at $(3+\\sqrt2)(1+\\sqrt2)^{2k},(1+\\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\\frac{1}{1+\\sqrt2}=-(1-\\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\\ge0$.</p>\n\n<p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p>\n\n<p><strong>Clarification:</strong> When I say \"these can be generated by looking at $(a+b\\sqrt2)^k$\", I refer to the fact that by writing $(a+b\\sqrt2)^k=A_k+B_k\\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\\sqrt2\\to a^2-2b^2$ is multiplicative.</p>\n" }, { "answer_id": 2085106, "score": 3, "text": "<p>Because $2(n+7)-(2n+7)=7$, we know that $\\gcd(n+7,2n+7)\\mid7$.</p>\n\n<p>Since we only want positive $n$, this implies one of</p>\n\n<ol>\n<li><p>$n+7$ and $2n+7$ are squares, or</p></li>\n<li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p>\n\n<hr></li>\n</ol>\n\n<p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation\n$$\n2p^2-q^2=7\\tag{1}\n$$\nEquation $(1)$ has two sequences of solutions:\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (2,1)\\ \\ \\ }^{n=-3},\\overbrace{\\ \\ (8,11)\\ \\ }^{n=57},\\overbrace{\\ (46,65)\\ }^{n=2109},\\dots\\}\\tag{2}\n$$\nand\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (4,5)\\ \\ \\ }^{n=9},\\overbrace{\\ (22,31)\\ }^{n=477},\\overbrace{(128,181)}^{n=16377},\\dots\\}\\tag{3}\n$$\nBoth of which satisfy the recursion\n$$\n(p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\\tag{4}\n$$\nWe can solve the recursion $(4)$ for $(2)$ with\n$$\np_k=\\tfrac14\\left(4+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(4-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{5}\n$$\nand\n$$\nq_k=\\tfrac12\\left(1+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac12\\left(1-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{6}\n$$\nAdditionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p>\n\n<p>Therefore, we get the solutions for $n$ as\n$$\n\\begin{align}\nn_k\n&amp;=p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac18\\left[\\left(9+4\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^{2k}+\\left(9-4\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^{2k}-42\\right]}\\tag{7}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}$. 
For example, if $k\\in\\{-5,-4,-3,-2,-1,0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\\}\n$$</p>\n\n<hr>\n\n<p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. As above, this leads to the Pell's Equation\n$$\n2p^2-q^2=1\\tag{8}\n$$\nEquation $(8)$ has the solutions\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (1,1)\\ \\ \\ }^{n=0},\\overbrace{\\ \\ \\ (5,7)\\ \\ \\ }^{n=168},\\overbrace{\\ (29,41)\\ }^{n=5880},\\dots\\}\\tag{9}\n$$\nwhich satisfies $(4)$.</p>\n\n<p>We can solve the recursion $(4)$ for $(9)$ with\n$$\np_k=\\tfrac14\\left(2+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{10}\n$$\nand\n$$\nq_k=\\tfrac14\\left(2+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{11}\n$$\nFormulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\\ge0$.\n$$\n\\begin{align}\nn_k\n&amp;=7p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac78\\left[\\left(3+2\\sqrt2\\right)^{2k+1}+\\left(3-2\\sqrt2\\right)^{2k+1}-6\\right]}\\tag{12}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}^{\\ge0}$. For exmple, if $k\\in\\{0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{0,168,5880,199920,6791568,230713560\\}\n$$</p>\n\n<hr>\n\n<p><strong>Complete Closed and Recursive Solutions</strong></p>\n\n<p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\\ge1$, we get that the closed form for the complete list of solutions is\n$$\n\\hspace{-8mm}\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}\n=\\frac18\\left(\\,\\begin{bmatrix}9-4\\sqrt2\\\\9+4\\sqrt2\\\\21+14\\sqrt2\\end{bmatrix}\\left(3+2\\sqrt2\\right)^{2k}+\\begin{bmatrix}9+4\\sqrt2\\\\9-4\\sqrt2\\\\21-14\\sqrt2\\end{bmatrix}\\left(3-2\\sqrt2\\right)^{2k}-\\begin{bmatrix}42\\\\42\\\\42\\end{bmatrix}\\,\\right)}\\tag{13}\n$$\nand a recursive list of solutions is given by<br>\n$\\scriptsize\\begin{bmatrix}n_1\\\\n_2\\\\n_3\\end{bmatrix}=\\begin{bmatrix}9\\\\57\\\\168\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_4\\\\n_5\\\\n_6\\end{bmatrix}=\\begin{bmatrix}477\\\\2109\\\\5880\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_7\\\\n_8\\\\n_9\\end{bmatrix}=\\begin{bmatrix}16377\\\\71817\\\\199920\\end{bmatrix},\\quad$ and\n$$\n\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k+1}\\\\n_{3k+2}\\\\n_{3k+3}\\end{bmatrix}=35\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}-35\\begin{bmatrix}n_{3k-5}\\\\n_{3k-4}\\\\n_{3k-3}\\end{bmatrix}+\\begin{bmatrix}n_{3k-8}\\\\n_{3k-7}\\\\n_{3k-6}\\end{bmatrix}\n}\\tag{14}\n$$</p>\n" }, { "answer_id": 2083707, "score": 2, "text": "<p>There are more solutions (n&lt;1000000):\n$$\n\\begin{array}{c}\n 9 \\\\\n 57 \\\\\n 168 \\\\\n 477 \\\\\n 2109 \\\\\n 5880 \\\\\n 16377 \\\\\n 71817 \\\\\n 199920 \\\\\n 556509 \\\\\n\\end{array}\n$$\nso you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p>\n\n<p>Added after observation of ross-millikan:</p>\n\n<p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. 
Thus we have two possibilities</p>\n\n<p>a) $n+7=a^2$ and $2n+7=b^2$</p>\n\n<p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p>\n\n<p>for b) n should be divisible by $7$</p>\n" }, { "answer_id": 2083840, "score": 4, "text": "<p>$$\\gcd(n+7,2n+7)=\\gcd(n+7,n)=\\gcd(7,n)\\in\\{1,7\\}$$</p>\n\n<p>If the $\\gcd=1$, then $n+7,2n+7$ are both squares:</p>\n\n<p>$$n+7=a^2,2n+7=b^2\\implies 2a^2-b^2=7$$</p>\n\n<p>If the $\\gcd=7$, then $\\frac{n+7}{7},\\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p>\n\n<p>$$m+1=a^2,2m+1=b^2\\implies2a^2-b^2=1$$</p>\n\n<p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p>\n\n<ul>\n<li><p>The first equation has solutions starting with\n$\\{(2,1),(8,11),\\cdots\\}$</p></li>\n<li><p>The second equation has solutions starting with\n$\\{(1,1),(5,7),(41,29),\\cdots\\}$.</p></li>\n</ul>\n\n<p>These can be generated by looking at $(3+\\sqrt2)(1+\\sqrt2)^{2k},(1+\\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\\frac{1}{1+\\sqrt2}=-(1-\\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\\ge0$.</p>\n\n<p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p>\n\n<p><strong>Clarification:</strong> When I say \"these can be generated by looking at $(a+b\\sqrt2)^k$\", I refer to the fact that by writing $(a+b\\sqrt2)^k=A_k+B_k\\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\\sqrt2\\to a^2-2b^2$ is multiplicative.</p>\n" }, { "answer_id": 2085106, "score": 3, "text": "<p>Because $2(n+7)-(2n+7)=7$, we know that $\\gcd(n+7,2n+7)\\mid7$.</p>\n\n<p>Since we only want positive $n$, this implies one of</p>\n\n<ol>\n<li><p>$n+7$ and $2n+7$ are squares, or</p></li>\n<li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p>\n\n<hr></li>\n</ol>\n\n<p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation\n$$\n2p^2-q^2=7\\tag{1}\n$$\nEquation $(1)$ has two sequences of solutions:\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (2,1)\\ \\ \\ }^{n=-3},\\overbrace{\\ \\ (8,11)\\ \\ }^{n=57},\\overbrace{\\ (46,65)\\ }^{n=2109},\\dots\\}\\tag{2}\n$$\nand\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (4,5)\\ \\ \\ }^{n=9},\\overbrace{\\ (22,31)\\ }^{n=477},\\overbrace{(128,181)}^{n=16377},\\dots\\}\\tag{3}\n$$\nBoth of which satisfy the recursion\n$$\n(p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\\tag{4}\n$$\nWe can solve the recursion $(4)$ for $(2)$ with\n$$\np_k=\\tfrac14\\left(4+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(4-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{5}\n$$\nand\n$$\nq_k=\\tfrac12\\left(1+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac12\\left(1-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{6}\n$$\nAdditionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p>\n\n<p>Therefore, we get the solutions for $n$ as\n$$\n\\begin{align}\nn_k\n&amp;=p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac18\\left[\\left(9+4\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^{2k}+\\left(9-4\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^{2k}-42\\right]}\\tag{7}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}$. 
For example, if $k\\in\\{-5,-4,-3,-2,-1,0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\\}\n$$</p>\n\n<hr>\n\n<p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. As above, this leads to the Pell's Equation\n$$\n2p^2-q^2=1\\tag{8}\n$$\nEquation $(8)$ has the solutions\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (1,1)\\ \\ \\ }^{n=0},\\overbrace{\\ \\ \\ (5,7)\\ \\ \\ }^{n=168},\\overbrace{\\ (29,41)\\ }^{n=5880},\\dots\\}\\tag{9}\n$$\nwhich satisfies $(4)$.</p>\n\n<p>We can solve the recursion $(4)$ for $(9)$ with\n$$\np_k=\\tfrac14\\left(2+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{10}\n$$\nand\n$$\nq_k=\\tfrac14\\left(2+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{11}\n$$\nFormulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\\ge0$.\n$$\n\\begin{align}\nn_k\n&amp;=7p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac78\\left[\\left(3+2\\sqrt2\\right)^{2k+1}+\\left(3-2\\sqrt2\\right)^{2k+1}-6\\right]}\\tag{12}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}^{\\ge0}$. For exmple, if $k\\in\\{0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{0,168,5880,199920,6791568,230713560\\}\n$$</p>\n\n<hr>\n\n<p><strong>Complete Closed and Recursive Solutions</strong></p>\n\n<p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\\ge1$, we get that the closed form for the complete list of solutions is\n$$\n\\hspace{-8mm}\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}\n=\\frac18\\left(\\,\\begin{bmatrix}9-4\\sqrt2\\\\9+4\\sqrt2\\\\21+14\\sqrt2\\end{bmatrix}\\left(3+2\\sqrt2\\right)^{2k}+\\begin{bmatrix}9+4\\sqrt2\\\\9-4\\sqrt2\\\\21-14\\sqrt2\\end{bmatrix}\\left(3-2\\sqrt2\\right)^{2k}-\\begin{bmatrix}42\\\\42\\\\42\\end{bmatrix}\\,\\right)}\\tag{13}\n$$\nand a recursive list of solutions is given by<br>\n$\\scriptsize\\begin{bmatrix}n_1\\\\n_2\\\\n_3\\end{bmatrix}=\\begin{bmatrix}9\\\\57\\\\168\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_4\\\\n_5\\\\n_6\\end{bmatrix}=\\begin{bmatrix}477\\\\2109\\\\5880\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_7\\\\n_8\\\\n_9\\end{bmatrix}=\\begin{bmatrix}16377\\\\71817\\\\199920\\end{bmatrix},\\quad$ and\n$$\n\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k+1}\\\\n_{3k+2}\\\\n_{3k+3}\\end{bmatrix}=35\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}-35\\begin{bmatrix}n_{3k-5}\\\\n_{3k-4}\\\\n_{3k-3}\\end{bmatrix}+\\begin{bmatrix}n_{3k-8}\\\\n_{3k-7}\\\\n_{3k-6}\\end{bmatrix}\n}\\tag{14}\n$$</p>\n" }, { "answer_id": 2083707, "score": 2, "text": "<p>There are more solutions (n&lt;1000000):\n$$\n\\begin{array}{c}\n 9 \\\\\n 57 \\\\\n 168 \\\\\n 477 \\\\\n 2109 \\\\\n 5880 \\\\\n 16377 \\\\\n 71817 \\\\\n 199920 \\\\\n 556509 \\\\\n\\end{array}\n$$\nso you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p>\n\n<p>Added after observation of ross-millikan:</p>\n\n<p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. 
Thus we have two possibilities</p>\n\n<p>a) $n+7=a^2$ and $2n+7=b^2$</p>\n\n<p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p>\n\n<p>for b) n should be divisible by $7$</p>\n" }, { "answer_id": 2083840, "score": 4, "text": "<p>$$\\gcd(n+7,2n+7)=\\gcd(n+7,n)=\\gcd(7,n)\\in\\{1,7\\}$$</p>\n\n<p>If the $\\gcd=1$, then $n+7,2n+7$ are both squares:</p>\n\n<p>$$n+7=a^2,2n+7=b^2\\implies 2a^2-b^2=7$$</p>\n\n<p>If the $\\gcd=7$, then $\\frac{n+7}{7},\\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p>\n\n<p>$$m+1=a^2,2m+1=b^2\\implies2a^2-b^2=1$$</p>\n\n<p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p>\n\n<ul>\n<li><p>The first equation has solutions starting with\n$\\{(2,1),(8,11),\\cdots\\}$</p></li>\n<li><p>The second equation has solutions starting with\n$\\{(1,1),(5,7),(41,29),\\cdots\\}$.</p></li>\n</ul>\n\n<p>These can be generated by looking at $(3+\\sqrt2)(1+\\sqrt2)^{2k},(1+\\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\\frac{1}{1+\\sqrt2}=-(1-\\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\\ge0$.</p>\n\n<p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p>\n\n<p><strong>Clarification:</strong> When I say \"these can be generated by looking at $(a+b\\sqrt2)^k$\", I refer to the fact that by writing $(a+b\\sqrt2)^k=A_k+B_k\\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\\sqrt2\\to a^2-2b^2$ is multiplicative.</p>\n" }, { "answer_id": 2085106, "score": 3, "text": "<p>Because $2(n+7)-(2n+7)=7$, we know that $\\gcd(n+7,2n+7)\\mid7$.</p>\n\n<p>Since we only want positive $n$, this implies one of</p>\n\n<ol>\n<li><p>$n+7$ and $2n+7$ are squares, or</p></li>\n<li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p>\n\n<hr></li>\n</ol>\n\n<p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation\n$$\n2p^2-q^2=7\\tag{1}\n$$\nEquation $(1)$ has two sequences of solutions:\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (2,1)\\ \\ \\ }^{n=-3},\\overbrace{\\ \\ (8,11)\\ \\ }^{n=57},\\overbrace{\\ (46,65)\\ }^{n=2109},\\dots\\}\\tag{2}\n$$\nand\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (4,5)\\ \\ \\ }^{n=9},\\overbrace{\\ (22,31)\\ }^{n=477},\\overbrace{(128,181)}^{n=16377},\\dots\\}\\tag{3}\n$$\nBoth of which satisfy the recursion\n$$\n(p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\\tag{4}\n$$\nWe can solve the recursion $(4)$ for $(2)$ with\n$$\np_k=\\tfrac14\\left(4+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(4-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{5}\n$$\nand\n$$\nq_k=\\tfrac12\\left(1+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac12\\left(1-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{6}\n$$\nAdditionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p>\n\n<p>Therefore, we get the solutions for $n$ as\n$$\n\\begin{align}\nn_k\n&amp;=p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac18\\left[\\left(9+4\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^{2k}+\\left(9-4\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^{2k}-42\\right]}\\tag{7}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}$. 
For example, if $k\\in\\{-5,-4,-3,-2,-1,0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\\}\n$$</p>\n\n<hr>\n\n<p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. As above, this leads to the Pell's Equation\n$$\n2p^2-q^2=1\\tag{8}\n$$\nEquation $(8)$ has the solutions\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (1,1)\\ \\ \\ }^{n=0},\\overbrace{\\ \\ \\ (5,7)\\ \\ \\ }^{n=168},\\overbrace{\\ (29,41)\\ }^{n=5880},\\dots\\}\\tag{9}\n$$\nwhich satisfies $(4)$.</p>\n\n<p>We can solve the recursion $(4)$ for $(9)$ with\n$$\np_k=\\tfrac14\\left(2+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{10}\n$$\nand\n$$\nq_k=\\tfrac14\\left(2+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{11}\n$$\nFormulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\\ge0$.\n$$\n\\begin{align}\nn_k\n&amp;=7p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac78\\left[\\left(3+2\\sqrt2\\right)^{2k+1}+\\left(3-2\\sqrt2\\right)^{2k+1}-6\\right]}\\tag{12}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}^{\\ge0}$. For exmple, if $k\\in\\{0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{0,168,5880,199920,6791568,230713560\\}\n$$</p>\n\n<hr>\n\n<p><strong>Complete Closed and Recursive Solutions</strong></p>\n\n<p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\\ge1$, we get that the closed form for the complete list of solutions is\n$$\n\\hspace{-8mm}\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}\n=\\frac18\\left(\\,\\begin{bmatrix}9-4\\sqrt2\\\\9+4\\sqrt2\\\\21+14\\sqrt2\\end{bmatrix}\\left(3+2\\sqrt2\\right)^{2k}+\\begin{bmatrix}9+4\\sqrt2\\\\9-4\\sqrt2\\\\21-14\\sqrt2\\end{bmatrix}\\left(3-2\\sqrt2\\right)^{2k}-\\begin{bmatrix}42\\\\42\\\\42\\end{bmatrix}\\,\\right)}\\tag{13}\n$$\nand a recursive list of solutions is given by<br>\n$\\scriptsize\\begin{bmatrix}n_1\\\\n_2\\\\n_3\\end{bmatrix}=\\begin{bmatrix}9\\\\57\\\\168\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_4\\\\n_5\\\\n_6\\end{bmatrix}=\\begin{bmatrix}477\\\\2109\\\\5880\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_7\\\\n_8\\\\n_9\\end{bmatrix}=\\begin{bmatrix}16377\\\\71817\\\\199920\\end{bmatrix},\\quad$ and\n$$\n\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k+1}\\\\n_{3k+2}\\\\n_{3k+3}\\end{bmatrix}=35\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}-35\\begin{bmatrix}n_{3k-5}\\\\n_{3k-4}\\\\n_{3k-3}\\end{bmatrix}+\\begin{bmatrix}n_{3k-8}\\\\n_{3k-7}\\\\n_{3k-6}\\end{bmatrix}\n}\\tag{14}\n$$</p>\n" }, { "answer_id": 2083707, "score": 2, "text": "<p>There are more solutions (n&lt;1000000):\n$$\n\\begin{array}{c}\n 9 \\\\\n 57 \\\\\n 168 \\\\\n 477 \\\\\n 2109 \\\\\n 5880 \\\\\n 16377 \\\\\n 71817 \\\\\n 199920 \\\\\n 556509 \\\\\n\\end{array}\n$$\nso you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p>\n\n<p>Added after observation of ross-millikan:</p>\n\n<p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. 
Thus we have two possibilities</p>\n\n<p>a) $n+7=a^2$ and $2n+7=b^2$</p>\n\n<p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p>\n\n<p>for b) n should be divisible by $7$</p>\n" }, { "answer_id": 2083840, "score": 4, "text": "<p>$$\\gcd(n+7,2n+7)=\\gcd(n+7,n)=\\gcd(7,n)\\in\\{1,7\\}$$</p>\n\n<p>If the $\\gcd=1$, then $n+7,2n+7$ are both squares:</p>\n\n<p>$$n+7=a^2,2n+7=b^2\\implies 2a^2-b^2=7$$</p>\n\n<p>If the $\\gcd=7$, then $\\frac{n+7}{7},\\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p>\n\n<p>$$m+1=a^2,2m+1=b^2\\implies2a^2-b^2=1$$</p>\n\n<p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p>\n\n<ul>\n<li><p>The first equation has solutions starting with\n$\\{(2,1),(8,11),\\cdots\\}$</p></li>\n<li><p>The second equation has solutions starting with\n$\\{(1,1),(5,7),(41,29),\\cdots\\}$.</p></li>\n</ul>\n\n<p>These can be generated by looking at $(3+\\sqrt2)(1+\\sqrt2)^{2k},(1+\\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\\frac{1}{1+\\sqrt2}=-(1-\\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\\ge0$.</p>\n\n<p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p>\n\n<p><strong>Clarification:</strong> When I say \"these can be generated by looking at $(a+b\\sqrt2)^k$\", I refer to the fact that by writing $(a+b\\sqrt2)^k=A_k+B_k\\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\\sqrt2\\to a^2-2b^2$ is multiplicative.</p>\n" }, { "answer_id": 2085106, "score": 3, "text": "<p>Because $2(n+7)-(2n+7)=7$, we know that $\\gcd(n+7,2n+7)\\mid7$.</p>\n\n<p>Since we only want positive $n$, this implies one of</p>\n\n<ol>\n<li><p>$n+7$ and $2n+7$ are squares, or</p></li>\n<li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p>\n\n<hr></li>\n</ol>\n\n<p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation\n$$\n2p^2-q^2=7\\tag{1}\n$$\nEquation $(1)$ has two sequences of solutions:\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (2,1)\\ \\ \\ }^{n=-3},\\overbrace{\\ \\ (8,11)\\ \\ }^{n=57},\\overbrace{\\ (46,65)\\ }^{n=2109},\\dots\\}\\tag{2}\n$$\nand\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (4,5)\\ \\ \\ }^{n=9},\\overbrace{\\ (22,31)\\ }^{n=477},\\overbrace{(128,181)}^{n=16377},\\dots\\}\\tag{3}\n$$\nBoth of which satisfy the recursion\n$$\n(p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\\tag{4}\n$$\nWe can solve the recursion $(4)$ for $(2)$ with\n$$\np_k=\\tfrac14\\left(4+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(4-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{5}\n$$\nand\n$$\nq_k=\\tfrac12\\left(1+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac12\\left(1-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{6}\n$$\nAdditionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p>\n\n<p>Therefore, we get the solutions for $n$ as\n$$\n\\begin{align}\nn_k\n&amp;=p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac18\\left[\\left(9+4\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^{2k}+\\left(9-4\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^{2k}-42\\right]}\\tag{7}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}$. 
For example, if $k\\in\\{-5,-4,-3,-2,-1,0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\\}\n$$</p>\n\n<hr>\n\n<p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. As above, this leads to the Pell's Equation\n$$\n2p^2-q^2=1\\tag{8}\n$$\nEquation $(8)$ has the solutions\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (1,1)\\ \\ \\ }^{n=0},\\overbrace{\\ \\ \\ (5,7)\\ \\ \\ }^{n=168},\\overbrace{\\ (29,41)\\ }^{n=5880},\\dots\\}\\tag{9}\n$$\nwhich satisfies $(4)$.</p>\n\n<p>We can solve the recursion $(4)$ for $(9)$ with\n$$\np_k=\\tfrac14\\left(2+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{10}\n$$\nand\n$$\nq_k=\\tfrac14\\left(2+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{11}\n$$\nFormulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\\ge0$.\n$$\n\\begin{align}\nn_k\n&amp;=7p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac78\\left[\\left(3+2\\sqrt2\\right)^{2k+1}+\\left(3-2\\sqrt2\\right)^{2k+1}-6\\right]}\\tag{12}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}^{\\ge0}$. For exmple, if $k\\in\\{0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{0,168,5880,199920,6791568,230713560\\}\n$$</p>\n\n<hr>\n\n<p><strong>Complete Closed and Recursive Solutions</strong></p>\n\n<p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\\ge1$, we get that the closed form for the complete list of solutions is\n$$\n\\hspace{-8mm}\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}\n=\\frac18\\left(\\,\\begin{bmatrix}9-4\\sqrt2\\\\9+4\\sqrt2\\\\21+14\\sqrt2\\end{bmatrix}\\left(3+2\\sqrt2\\right)^{2k}+\\begin{bmatrix}9+4\\sqrt2\\\\9-4\\sqrt2\\\\21-14\\sqrt2\\end{bmatrix}\\left(3-2\\sqrt2\\right)^{2k}-\\begin{bmatrix}42\\\\42\\\\42\\end{bmatrix}\\,\\right)}\\tag{13}\n$$\nand a recursive list of solutions is given by<br>\n$\\scriptsize\\begin{bmatrix}n_1\\\\n_2\\\\n_3\\end{bmatrix}=\\begin{bmatrix}9\\\\57\\\\168\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_4\\\\n_5\\\\n_6\\end{bmatrix}=\\begin{bmatrix}477\\\\2109\\\\5880\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_7\\\\n_8\\\\n_9\\end{bmatrix}=\\begin{bmatrix}16377\\\\71817\\\\199920\\end{bmatrix},\\quad$ and\n$$\n\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k+1}\\\\n_{3k+2}\\\\n_{3k+3}\\end{bmatrix}=35\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}-35\\begin{bmatrix}n_{3k-5}\\\\n_{3k-4}\\\\n_{3k-3}\\end{bmatrix}+\\begin{bmatrix}n_{3k-8}\\\\n_{3k-7}\\\\n_{3k-6}\\end{bmatrix}\n}\\tag{14}\n$$</p>\n" }, { "answer_id": 2083707, "score": 2, "text": "<p>There are more solutions (n&lt;1000000):\n$$\n\\begin{array}{c}\n 9 \\\\\n 57 \\\\\n 168 \\\\\n 477 \\\\\n 2109 \\\\\n 5880 \\\\\n 16377 \\\\\n 71817 \\\\\n 199920 \\\\\n 556509 \\\\\n\\end{array}\n$$\nso you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p>\n\n<p>Added after observation of ross-millikan:</p>\n\n<p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. 
Thus we have two possibilities</p>\n\n<p>a) $n+7=a^2$ and $2n+7=b^2$</p>\n\n<p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p>\n\n<p>for b) n should be divisible by $7$</p>\n" }, { "answer_id": 2083840, "score": 4, "text": "<p>$$\\gcd(n+7,2n+7)=\\gcd(n+7,n)=\\gcd(7,n)\\in\\{1,7\\}$$</p>\n\n<p>If the $\\gcd=1$, then $n+7,2n+7$ are both squares:</p>\n\n<p>$$n+7=a^2,2n+7=b^2\\implies 2a^2-b^2=7$$</p>\n\n<p>If the $\\gcd=7$, then $\\frac{n+7}{7},\\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p>\n\n<p>$$m+1=a^2,2m+1=b^2\\implies2a^2-b^2=1$$</p>\n\n<p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p>\n\n<ul>\n<li><p>The first equation has solutions starting with\n$\\{(2,1),(8,11),\\cdots\\}$</p></li>\n<li><p>The second equation has solutions starting with\n$\\{(1,1),(5,7),(41,29),\\cdots\\}$.</p></li>\n</ul>\n\n<p>These can be generated by looking at $(3+\\sqrt2)(1+\\sqrt2)^{2k},(1+\\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\\frac{1}{1+\\sqrt2}=-(1-\\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\\ge0$.</p>\n\n<p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p>\n\n<p><strong>Clarification:</strong> When I say \"these can be generated by looking at $(a+b\\sqrt2)^k$\", I refer to the fact that by writing $(a+b\\sqrt2)^k=A_k+B_k\\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\\sqrt2\\to a^2-2b^2$ is multiplicative.</p>\n" }, { "answer_id": 2085106, "score": 3, "text": "<p>Because $2(n+7)-(2n+7)=7$, we know that $\\gcd(n+7,2n+7)\\mid7$.</p>\n\n<p>Since we only want positive $n$, this implies one of</p>\n\n<ol>\n<li><p>$n+7$ and $2n+7$ are squares, or</p></li>\n<li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p>\n\n<hr></li>\n</ol>\n\n<p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation\n$$\n2p^2-q^2=7\\tag{1}\n$$\nEquation $(1)$ has two sequences of solutions:\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (2,1)\\ \\ \\ }^{n=-3},\\overbrace{\\ \\ (8,11)\\ \\ }^{n=57},\\overbrace{\\ (46,65)\\ }^{n=2109},\\dots\\}\\tag{2}\n$$\nand\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (4,5)\\ \\ \\ }^{n=9},\\overbrace{\\ (22,31)\\ }^{n=477},\\overbrace{(128,181)}^{n=16377},\\dots\\}\\tag{3}\n$$\nBoth of which satisfy the recursion\n$$\n(p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\\tag{4}\n$$\nWe can solve the recursion $(4)$ for $(2)$ with\n$$\np_k=\\tfrac14\\left(4+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(4-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{5}\n$$\nand\n$$\nq_k=\\tfrac12\\left(1+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac12\\left(1-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{6}\n$$\nAdditionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p>\n\n<p>Therefore, we get the solutions for $n$ as\n$$\n\\begin{align}\nn_k\n&amp;=p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac18\\left[\\left(9+4\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^{2k}+\\left(9-4\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^{2k}-42\\right]}\\tag{7}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}$. 
For example, if $k\\in\\{-5,-4,-3,-2,-1,0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\\}\n$$</p>\n\n<hr>\n\n<p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. As above, this leads to the Pell's Equation\n$$\n2p^2-q^2=1\\tag{8}\n$$\nEquation $(8)$ has the solutions\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (1,1)\\ \\ \\ }^{n=0},\\overbrace{\\ \\ \\ (5,7)\\ \\ \\ }^{n=168},\\overbrace{\\ (29,41)\\ }^{n=5880},\\dots\\}\\tag{9}\n$$\nwhich satisfies $(4)$.</p>\n\n<p>We can solve the recursion $(4)$ for $(9)$ with\n$$\np_k=\\tfrac14\\left(2+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{10}\n$$\nand\n$$\nq_k=\\tfrac14\\left(2+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{11}\n$$\nFormulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\\ge0$.\n$$\n\\begin{align}\nn_k\n&amp;=7p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac78\\left[\\left(3+2\\sqrt2\\right)^{2k+1}+\\left(3-2\\sqrt2\\right)^{2k+1}-6\\right]}\\tag{12}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}^{\\ge0}$. For exmple, if $k\\in\\{0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{0,168,5880,199920,6791568,230713560\\}\n$$</p>\n\n<hr>\n\n<p><strong>Complete Closed and Recursive Solutions</strong></p>\n\n<p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\\ge1$, we get that the closed form for the complete list of solutions is\n$$\n\\hspace{-8mm}\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}\n=\\frac18\\left(\\,\\begin{bmatrix}9-4\\sqrt2\\\\9+4\\sqrt2\\\\21+14\\sqrt2\\end{bmatrix}\\left(3+2\\sqrt2\\right)^{2k}+\\begin{bmatrix}9+4\\sqrt2\\\\9-4\\sqrt2\\\\21-14\\sqrt2\\end{bmatrix}\\left(3-2\\sqrt2\\right)^{2k}-\\begin{bmatrix}42\\\\42\\\\42\\end{bmatrix}\\,\\right)}\\tag{13}\n$$\nand a recursive list of solutions is given by<br>\n$\\scriptsize\\begin{bmatrix}n_1\\\\n_2\\\\n_3\\end{bmatrix}=\\begin{bmatrix}9\\\\57\\\\168\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_4\\\\n_5\\\\n_6\\end{bmatrix}=\\begin{bmatrix}477\\\\2109\\\\5880\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_7\\\\n_8\\\\n_9\\end{bmatrix}=\\begin{bmatrix}16377\\\\71817\\\\199920\\end{bmatrix},\\quad$ and\n$$\n\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k+1}\\\\n_{3k+2}\\\\n_{3k+3}\\end{bmatrix}=35\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}-35\\begin{bmatrix}n_{3k-5}\\\\n_{3k-4}\\\\n_{3k-3}\\end{bmatrix}+\\begin{bmatrix}n_{3k-8}\\\\n_{3k-7}\\\\n_{3k-6}\\end{bmatrix}\n}\\tag{14}\n$$</p>\n" }, { "answer_id": 2083707, "score": 2, "text": "<p>There are more solutions (n&lt;1000000):\n$$\n\\begin{array}{c}\n 9 \\\\\n 57 \\\\\n 168 \\\\\n 477 \\\\\n 2109 \\\\\n 5880 \\\\\n 16377 \\\\\n 71817 \\\\\n 199920 \\\\\n 556509 \\\\\n\\end{array}\n$$\nso you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p>\n\n<p>Added after observation of ross-millikan:</p>\n\n<p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. 
Thus we have two possibilities</p>\n\n<p>a) $n+7=a^2$ and $2n+7=b^2$</p>\n\n<p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p>\n\n<p>for b) n should be divisible by $7$</p>\n" }, { "answer_id": 2083840, "score": 4, "text": "<p>$$\\gcd(n+7,2n+7)=\\gcd(n+7,n)=\\gcd(7,n)\\in\\{1,7\\}$$</p>\n\n<p>If the $\\gcd=1$, then $n+7,2n+7$ are both squares:</p>\n\n<p>$$n+7=a^2,2n+7=b^2\\implies 2a^2-b^2=7$$</p>\n\n<p>If the $\\gcd=7$, then $\\frac{n+7}{7},\\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p>\n\n<p>$$m+1=a^2,2m+1=b^2\\implies2a^2-b^2=1$$</p>\n\n<p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p>\n\n<ul>\n<li><p>The first equation has solutions starting with\n$\\{(2,1),(8,11),\\cdots\\}$</p></li>\n<li><p>The second equation has solutions starting with\n$\\{(1,1),(5,7),(41,29),\\cdots\\}$.</p></li>\n</ul>\n\n<p>These can be generated by looking at $(3+\\sqrt2)(1+\\sqrt2)^{2k},(1+\\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\\frac{1}{1+\\sqrt2}=-(1-\\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\\ge0$.</p>\n\n<p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p>\n\n<p><strong>Clarification:</strong> When I say \"these can be generated by looking at $(a+b\\sqrt2)^k$\", I refer to the fact that by writing $(a+b\\sqrt2)^k=A_k+B_k\\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\\sqrt2\\to a^2-2b^2$ is multiplicative.</p>\n" }, { "answer_id": 2085106, "score": 3, "text": "<p>Because $2(n+7)-(2n+7)=7$, we know that $\\gcd(n+7,2n+7)\\mid7$.</p>\n\n<p>Since we only want positive $n$, this implies one of</p>\n\n<ol>\n<li><p>$n+7$ and $2n+7$ are squares, or</p></li>\n<li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p>\n\n<hr></li>\n</ol>\n\n<p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation\n$$\n2p^2-q^2=7\\tag{1}\n$$\nEquation $(1)$ has two sequences of solutions:\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (2,1)\\ \\ \\ }^{n=-3},\\overbrace{\\ \\ (8,11)\\ \\ }^{n=57},\\overbrace{\\ (46,65)\\ }^{n=2109},\\dots\\}\\tag{2}\n$$\nand\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (4,5)\\ \\ \\ }^{n=9},\\overbrace{\\ (22,31)\\ }^{n=477},\\overbrace{(128,181)}^{n=16377},\\dots\\}\\tag{3}\n$$\nBoth of which satisfy the recursion\n$$\n(p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\\tag{4}\n$$\nWe can solve the recursion $(4)$ for $(2)$ with\n$$\np_k=\\tfrac14\\left(4+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(4-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{5}\n$$\nand\n$$\nq_k=\\tfrac12\\left(1+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac12\\left(1-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{6}\n$$\nAdditionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p>\n\n<p>Therefore, we get the solutions for $n$ as\n$$\n\\begin{align}\nn_k\n&amp;=p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac18\\left[\\left(9+4\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^{2k}+\\left(9-4\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^{2k}-42\\right]}\\tag{7}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}$. 
For example, if $k\\in\\{-5,-4,-3,-2,-1,0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\\}\n$$</p>\n\n<hr>\n\n<p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. As above, this leads to the Pell's Equation\n$$\n2p^2-q^2=1\\tag{8}\n$$\nEquation $(8)$ has the solutions\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (1,1)\\ \\ \\ }^{n=0},\\overbrace{\\ \\ \\ (5,7)\\ \\ \\ }^{n=168},\\overbrace{\\ (29,41)\\ }^{n=5880},\\dots\\}\\tag{9}\n$$\nwhich satisfies $(4)$.</p>\n\n<p>We can solve the recursion $(4)$ for $(9)$ with\n$$\np_k=\\tfrac14\\left(2+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{10}\n$$\nand\n$$\nq_k=\\tfrac14\\left(2+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{11}\n$$\nFormulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\\ge0$.\n$$\n\\begin{align}\nn_k\n&amp;=7p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac78\\left[\\left(3+2\\sqrt2\\right)^{2k+1}+\\left(3-2\\sqrt2\\right)^{2k+1}-6\\right]}\\tag{12}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}^{\\ge0}$. For exmple, if $k\\in\\{0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{0,168,5880,199920,6791568,230713560\\}\n$$</p>\n\n<hr>\n\n<p><strong>Complete Closed and Recursive Solutions</strong></p>\n\n<p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\\ge1$, we get that the closed form for the complete list of solutions is\n$$\n\\hspace{-8mm}\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}\n=\\frac18\\left(\\,\\begin{bmatrix}9-4\\sqrt2\\\\9+4\\sqrt2\\\\21+14\\sqrt2\\end{bmatrix}\\left(3+2\\sqrt2\\right)^{2k}+\\begin{bmatrix}9+4\\sqrt2\\\\9-4\\sqrt2\\\\21-14\\sqrt2\\end{bmatrix}\\left(3-2\\sqrt2\\right)^{2k}-\\begin{bmatrix}42\\\\42\\\\42\\end{bmatrix}\\,\\right)}\\tag{13}\n$$\nand a recursive list of solutions is given by<br>\n$\\scriptsize\\begin{bmatrix}n_1\\\\n_2\\\\n_3\\end{bmatrix}=\\begin{bmatrix}9\\\\57\\\\168\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_4\\\\n_5\\\\n_6\\end{bmatrix}=\\begin{bmatrix}477\\\\2109\\\\5880\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_7\\\\n_8\\\\n_9\\end{bmatrix}=\\begin{bmatrix}16377\\\\71817\\\\199920\\end{bmatrix},\\quad$ and\n$$\n\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k+1}\\\\n_{3k+2}\\\\n_{3k+3}\\end{bmatrix}=35\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}-35\\begin{bmatrix}n_{3k-5}\\\\n_{3k-4}\\\\n_{3k-3}\\end{bmatrix}+\\begin{bmatrix}n_{3k-8}\\\\n_{3k-7}\\\\n_{3k-6}\\end{bmatrix}\n}\\tag{14}\n$$</p>\n" }, { "answer_id": 2083707, "score": 2, "text": "<p>There are more solutions (n&lt;1000000):\n$$\n\\begin{array}{c}\n 9 \\\\\n 57 \\\\\n 168 \\\\\n 477 \\\\\n 2109 \\\\\n 5880 \\\\\n 16377 \\\\\n 71817 \\\\\n 199920 \\\\\n 556509 \\\\\n\\end{array}\n$$\nso you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p>\n\n<p>Added after observation of ross-millikan:</p>\n\n<p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. 
Thus we have two possibilities</p>\n\n<p>a) $n+7=a^2$ and $2n+7=b^2$</p>\n\n<p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p>\n\n<p>for b) n should be divisible by $7$</p>\n" }, { "answer_id": 2083840, "score": 4, "text": "<p>$$\\gcd(n+7,2n+7)=\\gcd(n+7,n)=\\gcd(7,n)\\in\\{1,7\\}$$</p>\n\n<p>If the $\\gcd=1$, then $n+7,2n+7$ are both squares:</p>\n\n<p>$$n+7=a^2,2n+7=b^2\\implies 2a^2-b^2=7$$</p>\n\n<p>If the $\\gcd=7$, then $\\frac{n+7}{7},\\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p>\n\n<p>$$m+1=a^2,2m+1=b^2\\implies2a^2-b^2=1$$</p>\n\n<p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p>\n\n<ul>\n<li><p>The first equation has solutions starting with\n$\\{(2,1),(8,11),\\cdots\\}$</p></li>\n<li><p>The second equation has solutions starting with\n$\\{(1,1),(5,7),(41,29),\\cdots\\}$.</p></li>\n</ul>\n\n<p>These can be generated by looking at $(3+\\sqrt2)(1+\\sqrt2)^{2k},(1+\\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\\frac{1}{1+\\sqrt2}=-(1-\\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\\ge0$.</p>\n\n<p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p>\n\n<p><strong>Clarification:</strong> When I say \"these can be generated by looking at $(a+b\\sqrt2)^k$\", I refer to the fact that by writing $(a+b\\sqrt2)^k=A_k+B_k\\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\\sqrt2\\to a^2-2b^2$ is multiplicative.</p>\n" }, { "answer_id": 2085106, "score": 3, "text": "<p>Because $2(n+7)-(2n+7)=7$, we know that $\\gcd(n+7,2n+7)\\mid7$.</p>\n\n<p>Since we only want positive $n$, this implies one of</p>\n\n<ol>\n<li><p>$n+7$ and $2n+7$ are squares, or</p></li>\n<li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p>\n\n<hr></li>\n</ol>\n\n<p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation\n$$\n2p^2-q^2=7\\tag{1}\n$$\nEquation $(1)$ has two sequences of solutions:\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (2,1)\\ \\ \\ }^{n=-3},\\overbrace{\\ \\ (8,11)\\ \\ }^{n=57},\\overbrace{\\ (46,65)\\ }^{n=2109},\\dots\\}\\tag{2}\n$$\nand\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (4,5)\\ \\ \\ }^{n=9},\\overbrace{\\ (22,31)\\ }^{n=477},\\overbrace{(128,181)}^{n=16377},\\dots\\}\\tag{3}\n$$\nBoth of which satisfy the recursion\n$$\n(p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\\tag{4}\n$$\nWe can solve the recursion $(4)$ for $(2)$ with\n$$\np_k=\\tfrac14\\left(4+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(4-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{5}\n$$\nand\n$$\nq_k=\\tfrac12\\left(1+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac12\\left(1-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{6}\n$$\nAdditionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p>\n\n<p>Therefore, we get the solutions for $n$ as\n$$\n\\begin{align}\nn_k\n&amp;=p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac18\\left[\\left(9+4\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^{2k}+\\left(9-4\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^{2k}-42\\right]}\\tag{7}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}$. 
For example, if $k\\in\\{-5,-4,-3,-2,-1,0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\\}\n$$</p>\n\n<hr>\n\n<p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. As above, this leads to the Pell's Equation\n$$\n2p^2-q^2=1\\tag{8}\n$$\nEquation $(8)$ has the solutions\n$$\n(p,q)\\in\\{\\overbrace{\\ \\ \\ (1,1)\\ \\ \\ }^{n=0},\\overbrace{\\ \\ \\ (5,7)\\ \\ \\ }^{n=168},\\overbrace{\\ (29,41)\\ }^{n=5880},\\dots\\}\\tag{9}\n$$\nwhich satisfies $(4)$.</p>\n\n<p>We can solve the recursion $(4)$ for $(9)$ with\n$$\np_k=\\tfrac14\\left(2+\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{10}\n$$\nand\n$$\nq_k=\\tfrac14\\left(2+2\\sqrt2\\right)\\left(3+2\\sqrt2\\right)^k+\\tfrac14\\left(2-2\\sqrt2\\right)\\left(3-2\\sqrt2\\right)^k\\tag{11}\n$$\nFormulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\\ge0$.\n$$\n\\begin{align}\nn_k\n&amp;=7p_k^2-7\\\\\n&amp;=\\bbox[5px,border:2px solid #C0A000]{\\frac78\\left[\\left(3+2\\sqrt2\\right)^{2k+1}+\\left(3-2\\sqrt2\\right)^{2k+1}-6\\right]}\\tag{12}\n\\end{align}\n$$\nwhere $k\\in\\mathbb{Z}^{\\ge0}$. For exmple, if $k\\in\\{0,1,2,3,4,5\\}$,\n$$\nn_k\\in\\{0,168,5880,199920,6791568,230713560\\}\n$$</p>\n\n<hr>\n\n<p><strong>Complete Closed and Recursive Solutions</strong></p>\n\n<p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\\ge1$, we get that the closed form for the complete list of solutions is\n$$\n\\hspace{-8mm}\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}\n=\\frac18\\left(\\,\\begin{bmatrix}9-4\\sqrt2\\\\9+4\\sqrt2\\\\21+14\\sqrt2\\end{bmatrix}\\left(3+2\\sqrt2\\right)^{2k}+\\begin{bmatrix}9+4\\sqrt2\\\\9-4\\sqrt2\\\\21-14\\sqrt2\\end{bmatrix}\\left(3-2\\sqrt2\\right)^{2k}-\\begin{bmatrix}42\\\\42\\\\42\\end{bmatrix}\\,\\right)}\\tag{13}\n$$\nand a recursive list of solutions is given by<br>\n$\\scriptsize\\begin{bmatrix}n_1\\\\n_2\\\\n_3\\end{bmatrix}=\\begin{bmatrix}9\\\\57\\\\168\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_4\\\\n_5\\\\n_6\\end{bmatrix}=\\begin{bmatrix}477\\\\2109\\\\5880\\end{bmatrix},\\quad\\scriptsize\\begin{bmatrix}n_7\\\\n_8\\\\n_9\\end{bmatrix}=\\begin{bmatrix}16377\\\\71817\\\\199920\\end{bmatrix},\\quad$ and\n$$\n\\bbox[5px,border:2px solid #C0A000]{\n\\begin{bmatrix}n_{3k+1}\\\\n_{3k+2}\\\\n_{3k+3}\\end{bmatrix}=35\\begin{bmatrix}n_{3k-2}\\\\n_{3k-1}\\\\n_{3k}\\end{bmatrix}-35\\begin{bmatrix}n_{3k-5}\\\\n_{3k-4}\\\\n_{3k-3}\\end{bmatrix}+\\begin{bmatrix}n_{3k-8}\\\\n_{3k-7}\\\\n_{3k-6}\\end{bmatrix}\n}\\tag{14}\n$$</p>\n" }, { "answer_id": 2083707, "score": 2, "text": "<p>There are more solutions (n&lt;1000000):\n$$\n\\begin{array}{c}\n 9 \\\\\n 57 \\\\\n 168 \\\\\n 477 \\\\\n 2109 \\\\\n 5880 \\\\\n 16377 \\\\\n 71817 \\\\\n 199920 \\\\\n 556509 \\\\\n\\end{array}\n$$\nso you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p>\n\n<p>Added after observation of ross-millikan:</p>\n\n<p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. 
Thus we have two possibilities</p>\n\n<p>a) $n+7=a^2$ and $2n+7=b^2$</p>\n\n<p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p>\n\n<p>for b) n should be divisible by $7$</p>\n" } ]
Question: When is the fraction $\frac{n+7}{2n+7}$ the square of a rational number? Answer 1: <p>$$\gcd(n+7,2n+7)=\gcd(n+7,n)=\gcd(7,n)\in\{1,7\}$$</p> <p>If the $\gcd=1$, then $n+7,2n+7$ are both squares:</p> <p>$$n+7=a^2,2n+7=b^2\implies 2a^2-b^2=7$$</p> <p>If the $\gcd=7$, then $\frac{n+7}{7},\frac{2n+7}{7}$ are both integer squares. Write $n=7m$:</p> <p>$$m+1=a^2,2m+1=b^2\implies2a^2-b^2=1$$</p> <p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p> <ul> <li><p>The first equation has solutions starting with $\{(2,1),(8,11),\cdots\}$</p></li> <li><p>The second equation has solutions starting with $\{(1,1),(5,7),(29,41),\cdots\}$.</p></li> </ul> <p>These can be generated by looking at $(3+\sqrt2)(1+\sqrt2)^{2k},(1+\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\frac{1}{1+\sqrt2}=-(1-\sqrt2)$ tells us that for the second of these sequences, we can afford to simply look at $k\ge0$.</p> <p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p> <p><strong>Clarification:</strong> When I say "these can be generated by looking at $(a+b\sqrt2)^k$", I refer to the fact that by writing $(a+b\sqrt2)^k=A_k+B_k\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\sqrt2\to a^2-2b^2$ is multiplicative.</p> Answer 2: <p>Because $2(n+7)-(2n+7)=7$, we know that $\gcd(n+7,2n+7)\mid7$.</p> <p>Since we only want positive $n$, this implies one of</p> <ol> <li><p>$n+7$ and $2n+7$ are squares, or</p></li> <li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p> <hr></li> </ol> <p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n+7=2p^2-7$. This gives the Pell equation $$ 2p^2-q^2=7\tag{1} $$ Equation $(1)$ has two sequences of solutions: $$ (p,q)\in\{\overbrace{\ \ \ (2,1)\ \ \ }^{n=-3},\overbrace{\ \ (8,11)\ \ }^{n=57},\overbrace{\ (46,65)\ }^{n=2109},\dots\}\tag{2} $$ and $$ (p,q)\in\{\overbrace{\ \ \ (4,5)\ \ \ }^{n=9},\overbrace{\ (22,31)\ }^{n=477},\overbrace{(128,181)}^{n=16377},\dots\}\tag{3} $$ both of which satisfy the recursion $$ (p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\tag{4} $$ We can solve the recursion $(4)$ for $(2)$ with $$ p_k=\tfrac14\left(4+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(4-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{5} $$ and $$ q_k=\tfrac12\left(1+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac12\left(1-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{6} $$ Additionally, $(5)$ and $(6)$ with negative $k$ give $(3)$ with negative $q$.</p> <p>Therefore, we get the solutions for $n$ as $$ \begin{align} n_k &amp;=p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac18\left[\left(9+4\sqrt2\right)\left(3+2\sqrt2\right)^{2k}+\left(9-4\sqrt2\right)\left(3-2\sqrt2\right)^{2k}-42\right]}\tag{7} \end{align} $$ where $k\in\mathbb{Z}$. For example, if $k\in\{-5,-4,-3,-2,-1,0,1,2,3,4,5\}$, $$ n_k\in\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\} $$</p> <hr> <p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. 
As above, this leads to the Pell equation $$ 2p^2-q^2=1\tag{8} $$ Equation $(8)$ has the solutions $$ (p,q)\in\{\overbrace{\ \ \ (1,1)\ \ \ }^{n=0},\overbrace{\ \ \ (5,7)\ \ \ }^{n=168},\overbrace{\ (29,41)\ }^{n=5880},\dots\}\tag{9} $$ which satisfies $(4)$.</p> <p>We can solve the recursion $(4)$ for $(9)$ with $$ p_k=\tfrac14\left(2+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{10} $$ and $$ q_k=\tfrac14\left(2+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{11} $$ Formulas $(10)$ and $(11)$ with negative $k$ just give $(9)$ with negative $q$. Thus, we need only consider $k\ge0$. $$ \begin{align} n_k &amp;=7p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac78\left[\left(3+2\sqrt2\right)^{2k+1}+\left(3-2\sqrt2\right)^{2k+1}-6\right]}\tag{12} \end{align} $$ where $k\in\mathbb{Z}^{\ge0}$. For example, if $k\in\{0,1,2,3,4,5\}$, $$ n_k\in\{0,168,5880,199920,6791568,230713560\} $$</p> <hr> <p><strong>Complete Closed and Recursive Solutions</strong></p> <p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\ge1$, we get that the closed form for the complete list of solutions is $$ \hspace{-8mm}\bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix} =\frac18\left(\,\begin{bmatrix}9-4\sqrt2\\9+4\sqrt2\\21+14\sqrt2\end{bmatrix}\left(3+2\sqrt2\right)^{2k}+\begin{bmatrix}9+4\sqrt2\\9-4\sqrt2\\21-14\sqrt2\end{bmatrix}\left(3-2\sqrt2\right)^{2k}-\begin{bmatrix}42\\42\\42\end{bmatrix}\,\right)}\tag{13} $$ and a recursive list of solutions is given by<br> $\scriptsize\begin{bmatrix}n_1\\n_2\\n_3\end{bmatrix}=\begin{bmatrix}9\\57\\168\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_4\\n_5\\n_6\end{bmatrix}=\begin{bmatrix}477\\2109\\5880\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_7\\n_8\\n_9\end{bmatrix}=\begin{bmatrix}16377\\71817\\199920\end{bmatrix},\quad$ and $$ \bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k+1}\\n_{3k+2}\\n_{3k+3}\end{bmatrix}=35\begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix}-35\begin{bmatrix}n_{3k-5}\\n_{3k-4}\\n_{3k-3}\end{bmatrix}+\begin{bmatrix}n_{3k-8}\\n_{3k-7}\\n_{3k-6}\end{bmatrix} }\tag{14} $$</p> Answer 3: <p>There are more solutions (n&lt;1000000): $$ \begin{array}{c} 9 \\ 57 \\ 168 \\ 477 \\ 2109 \\ 5880 \\ 16377 \\ 71817 \\ 199920 \\ 556509 \\ \end{array} $$ so you can just plug in these values of $n$ and check; otherwise the question does not make sense. Also, one solution in your list is missing for $n&lt;500$.</p> <p>Added after an observation of ross-millikan:</p> <p>$n+7$ and $2n+7$ can have a common factor; if so, then $2(n+7)-(2n+7)$ is also divisible by the same number, which is clearly $7$, so if the numerator and denominator have a common factor, it can only be $7$. Thus we have two possibilities:</p> <p>a) $n+7=a^2$ and $2n+7=b^2$</p> <p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p> <p>for b), $n$ must be divisible by $7$</p>
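The numerical list in Answer 3 can be reproduced directly from Answer 1's reduction: since $\gcd(n+7,2n+7)\in\{1,7\}$, one only has to test whether $n+7$ and $2n+7$ are both squares, or both $7$ times squares. Here is a short brute-force Python sketch; it is an editorial addition (not from the original thread), and the helper name `is_square` is mine:

```python
from math import isqrt

def is_square(m: int) -> bool:
    # exact integer-square test
    return m >= 0 and isqrt(m) ** 2 == m

# gcd(n+7, 2n+7) divides 7, so (n+7)/(2n+7) is the square of a rational
# iff n+7 and 2n+7 are both squares (case a) or both 7 times squares
# (case b).  n = 0 also qualifies via case b but is excluded below.
hits = [n for n in range(1, 1_000_000)
        if (is_square(n + 7) and is_square(2 * n + 7))
        or (n % 7 == 0 and is_square((n + 7) // 7)
            and is_square((2 * n + 7) // 7))]
print(hits)  # [9, 57, 168, 477, 2109, 5880, 16377, 71817, 199920, 556509]
```

This reprints exactly the ten values $9, 57, 168, \dots, 556509$ listed above for $n<1000000$.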
Write $n=7m$:</p> <p>$$m+1=a^2,2m+1=b^2\implies2a^2-b^2=1$$</p> <p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p> <ul> <li><p>The first equation has solutions starting with $\{(2,1),(8,11),\cdots\}$</p></li> <li><p>The second equation has solutions starting with $\{(1,1),(5,7),(41,29),\cdots\}$.</p></li> </ul> <p>These can be generated by looking at $(3+\sqrt2)(1+\sqrt2)^{2k},(1+\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\frac{1}{1+\sqrt2}=-(1-\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\ge0$.</p> <p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p> <p><strong>Clarification:</strong> When I say "these can be generated by looking at $(a+b\sqrt2)^k$", I refer to the fact that by writing $(a+b\sqrt2)^k=A_k+B_k\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\sqrt2\to a^2-2b^2$ is multiplicative.</p> Answer 5: <p>Because $2(n+7)-(2n+7)=7$, we know that $\gcd(n+7,2n+7)\mid7$.</p> <p>Since we only want positive $n$, this implies one of</p> <ol> <li><p>$n+7$ and $2n+7$ are squares, or</p></li> <li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p> <hr></li> </ol> <p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation $$ 2p^2-q^2=7\tag{1} $$ Equation $(1)$ has two sequences of solutions: $$ (p,q)\in\{\overbrace{\ \ \ (2,1)\ \ \ }^{n=-3},\overbrace{\ \ (8,11)\ \ }^{n=57},\overbrace{\ (46,65)\ }^{n=2109},\dots\}\tag{2} $$ and $$ (p,q)\in\{\overbrace{\ \ \ (4,5)\ \ \ }^{n=9},\overbrace{\ (22,31)\ }^{n=477},\overbrace{(128,181)}^{n=16377},\dots\}\tag{3} $$ Both of which satisfy the recursion $$ (p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\tag{4} $$ We can solve the recursion $(4)$ for $(2)$ with $$ p_k=\tfrac14\left(4+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(4-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{5} $$ and $$ q_k=\tfrac12\left(1+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac12\left(1-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{6} $$ Additionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p> <p>Therefore, we get the solutions for $n$ as $$ \begin{align} n_k &amp;=p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac18\left[\left(9+4\sqrt2\right)\left(3+2\sqrt2\right)^{2k}+\left(9-4\sqrt2\right)\left(3-2\sqrt2\right)^{2k}-42\right]}\tag{7} \end{align} $$ where $k\in\mathbb{Z}$. For example, if $k\in\{-5,-4,-3,-2,-1,0,1,2,3,4,5\}$, $$ n_k\in\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\} $$</p> <hr> <p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. 
As above, this leads to the Pell's Equation $$ 2p^2-q^2=1\tag{8} $$ Equation $(8)$ has the solutions $$ (p,q)\in\{\overbrace{\ \ \ (1,1)\ \ \ }^{n=0},\overbrace{\ \ \ (5,7)\ \ \ }^{n=168},\overbrace{\ (29,41)\ }^{n=5880},\dots\}\tag{9} $$ which satisfies $(4)$.</p> <p>We can solve the recursion $(4)$ for $(9)$ with $$ p_k=\tfrac14\left(2+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{10} $$ and $$ q_k=\tfrac14\left(2+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{11} $$ Formulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\ge0$. $$ \begin{align} n_k &amp;=7p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac78\left[\left(3+2\sqrt2\right)^{2k+1}+\left(3-2\sqrt2\right)^{2k+1}-6\right]}\tag{12} \end{align} $$ where $k\in\mathbb{Z}^{\ge0}$. For exmple, if $k\in\{0,1,2,3,4,5\}$, $$ n_k\in\{0,168,5880,199920,6791568,230713560\} $$</p> <hr> <p><strong>Complete Closed and Recursive Solutions</strong></p> <p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\ge1$, we get that the closed form for the complete list of solutions is $$ \hspace{-8mm}\bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix} =\frac18\left(\,\begin{bmatrix}9-4\sqrt2\\9+4\sqrt2\\21+14\sqrt2\end{bmatrix}\left(3+2\sqrt2\right)^{2k}+\begin{bmatrix}9+4\sqrt2\\9-4\sqrt2\\21-14\sqrt2\end{bmatrix}\left(3-2\sqrt2\right)^{2k}-\begin{bmatrix}42\\42\\42\end{bmatrix}\,\right)}\tag{13} $$ and a recursive list of solutions is given by<br> $\scriptsize\begin{bmatrix}n_1\\n_2\\n_3\end{bmatrix}=\begin{bmatrix}9\\57\\168\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_4\\n_5\\n_6\end{bmatrix}=\begin{bmatrix}477\\2109\\5880\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_7\\n_8\\n_9\end{bmatrix}=\begin{bmatrix}16377\\71817\\199920\end{bmatrix},\quad$ and $$ \bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k+1}\\n_{3k+2}\\n_{3k+3}\end{bmatrix}=35\begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix}-35\begin{bmatrix}n_{3k-5}\\n_{3k-4}\\n_{3k-3}\end{bmatrix}+\begin{bmatrix}n_{3k-8}\\n_{3k-7}\\n_{3k-6}\end{bmatrix} }\tag{14} $$</p> Answer 6: <p>There are more solutions (n&lt;1000000): $$ \begin{array}{c} 9 \\ 57 \\ 168 \\ 477 \\ 2109 \\ 5880 \\ 16377 \\ 71817 \\ 199920 \\ 556509 \\ \end{array} $$ so you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p> <p>Added after observation of ross-millikan:</p> <p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. Thus we have two possibilities</p> <p>a) $n+7=a^2$ and $2n+7=b^2$</p> <p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p> <p>for b) n should be divisible by $7$</p> Answer 7: <p>$$\gcd(n+7,2n+7)=\gcd(n+7,n)=\gcd(7,n)\in\{1,7\}$$</p> <p>If the $\gcd=1$, then $n+7,2n+7$ are both squares:</p> <p>$$n+7=a^2,2n+7=b^2\implies 2a^2-b^2=7$$</p> <p>If the $\gcd=7$, then $\frac{n+7}{7},\frac{2n+7}{7}$ are both integer squares. 
Write $n=7m$:</p> <p>$$m+1=a^2,2m+1=b^2\implies2a^2-b^2=1$$</p> <p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p> <ul> <li><p>The first equation has solutions starting with $\{(2,1),(8,11),\cdots\}$</p></li> <li><p>The second equation has solutions starting with $\{(1,1),(5,7),(41,29),\cdots\}$.</p></li> </ul> <p>These can be generated by looking at $(3+\sqrt2)(1+\sqrt2)^{2k},(1+\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\frac{1}{1+\sqrt2}=-(1-\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\ge0$.</p> <p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p> <p><strong>Clarification:</strong> When I say "these can be generated by looking at $(a+b\sqrt2)^k$", I refer to the fact that by writing $(a+b\sqrt2)^k=A_k+B_k\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\sqrt2\to a^2-2b^2$ is multiplicative.</p> Answer 8: <p>Because $2(n+7)-(2n+7)=7$, we know that $\gcd(n+7,2n+7)\mid7$.</p> <p>Since we only want positive $n$, this implies one of</p> <ol> <li><p>$n+7$ and $2n+7$ are squares, or</p></li> <li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p> <hr></li> </ol> <p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation $$ 2p^2-q^2=7\tag{1} $$ Equation $(1)$ has two sequences of solutions: $$ (p,q)\in\{\overbrace{\ \ \ (2,1)\ \ \ }^{n=-3},\overbrace{\ \ (8,11)\ \ }^{n=57},\overbrace{\ (46,65)\ }^{n=2109},\dots\}\tag{2} $$ and $$ (p,q)\in\{\overbrace{\ \ \ (4,5)\ \ \ }^{n=9},\overbrace{\ (22,31)\ }^{n=477},\overbrace{(128,181)}^{n=16377},\dots\}\tag{3} $$ Both of which satisfy the recursion $$ (p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\tag{4} $$ We can solve the recursion $(4)$ for $(2)$ with $$ p_k=\tfrac14\left(4+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(4-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{5} $$ and $$ q_k=\tfrac12\left(1+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac12\left(1-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{6} $$ Additionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p> <p>Therefore, we get the solutions for $n$ as $$ \begin{align} n_k &amp;=p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac18\left[\left(9+4\sqrt2\right)\left(3+2\sqrt2\right)^{2k}+\left(9-4\sqrt2\right)\left(3-2\sqrt2\right)^{2k}-42\right]}\tag{7} \end{align} $$ where $k\in\mathbb{Z}$. For example, if $k\in\{-5,-4,-3,-2,-1,0,1,2,3,4,5\}$, $$ n_k\in\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\} $$</p> <hr> <p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. 
As above, this leads to the Pell's Equation $$ 2p^2-q^2=1\tag{8} $$ Equation $(8)$ has the solutions $$ (p,q)\in\{\overbrace{\ \ \ (1,1)\ \ \ }^{n=0},\overbrace{\ \ \ (5,7)\ \ \ }^{n=168},\overbrace{\ (29,41)\ }^{n=5880},\dots\}\tag{9} $$ which satisfies $(4)$.</p> <p>We can solve the recursion $(4)$ for $(9)$ with $$ p_k=\tfrac14\left(2+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{10} $$ and $$ q_k=\tfrac14\left(2+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{11} $$ Formulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\ge0$. $$ \begin{align} n_k &amp;=7p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac78\left[\left(3+2\sqrt2\right)^{2k+1}+\left(3-2\sqrt2\right)^{2k+1}-6\right]}\tag{12} \end{align} $$ where $k\in\mathbb{Z}^{\ge0}$. For exmple, if $k\in\{0,1,2,3,4,5\}$, $$ n_k\in\{0,168,5880,199920,6791568,230713560\} $$</p> <hr> <p><strong>Complete Closed and Recursive Solutions</strong></p> <p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\ge1$, we get that the closed form for the complete list of solutions is $$ \hspace{-8mm}\bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix} =\frac18\left(\,\begin{bmatrix}9-4\sqrt2\\9+4\sqrt2\\21+14\sqrt2\end{bmatrix}\left(3+2\sqrt2\right)^{2k}+\begin{bmatrix}9+4\sqrt2\\9-4\sqrt2\\21-14\sqrt2\end{bmatrix}\left(3-2\sqrt2\right)^{2k}-\begin{bmatrix}42\\42\\42\end{bmatrix}\,\right)}\tag{13} $$ and a recursive list of solutions is given by<br> $\scriptsize\begin{bmatrix}n_1\\n_2\\n_3\end{bmatrix}=\begin{bmatrix}9\\57\\168\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_4\\n_5\\n_6\end{bmatrix}=\begin{bmatrix}477\\2109\\5880\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_7\\n_8\\n_9\end{bmatrix}=\begin{bmatrix}16377\\71817\\199920\end{bmatrix},\quad$ and $$ \bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k+1}\\n_{3k+2}\\n_{3k+3}\end{bmatrix}=35\begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix}-35\begin{bmatrix}n_{3k-5}\\n_{3k-4}\\n_{3k-3}\end{bmatrix}+\begin{bmatrix}n_{3k-8}\\n_{3k-7}\\n_{3k-6}\end{bmatrix} }\tag{14} $$</p> Answer 9: <p>There are more solutions (n&lt;1000000): $$ \begin{array}{c} 9 \\ 57 \\ 168 \\ 477 \\ 2109 \\ 5880 \\ 16377 \\ 71817 \\ 199920 \\ 556509 \\ \end{array} $$ so you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p> <p>Added after observation of ross-millikan:</p> <p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. Thus we have two possibilities</p> <p>a) $n+7=a^2$ and $2n+7=b^2$</p> <p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p> <p>for b) n should be divisible by $7$</p> Answer 10: <p>$$\gcd(n+7,2n+7)=\gcd(n+7,n)=\gcd(7,n)\in\{1,7\}$$</p> <p>If the $\gcd=1$, then $n+7,2n+7$ are both squares:</p> <p>$$n+7=a^2,2n+7=b^2\implies 2a^2-b^2=7$$</p> <p>If the $\gcd=7$, then $\frac{n+7}{7},\frac{2n+7}{7}$ are both integer squares. 
Write $n=7m$:</p> <p>$$m+1=a^2,2m+1=b^2\implies2a^2-b^2=1$$</p> <p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p> <ul> <li><p>The first equation has solutions starting with $\{(2,1),(8,11),\cdots\}$</p></li> <li><p>The second equation has solutions starting with $\{(1,1),(5,7),(41,29),\cdots\}$.</p></li> </ul> <p>These can be generated by looking at $(3+\sqrt2)(1+\sqrt2)^{2k},(1+\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\frac{1}{1+\sqrt2}=-(1-\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\ge0$.</p> <p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p> <p><strong>Clarification:</strong> When I say "these can be generated by looking at $(a+b\sqrt2)^k$", I refer to the fact that by writing $(a+b\sqrt2)^k=A_k+B_k\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\sqrt2\to a^2-2b^2$ is multiplicative.</p> Answer 11: <p>Because $2(n+7)-(2n+7)=7$, we know that $\gcd(n+7,2n+7)\mid7$.</p> <p>Since we only want positive $n$, this implies one of</p> <ol> <li><p>$n+7$ and $2n+7$ are squares, or</p></li> <li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p> <hr></li> </ol> <p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation $$ 2p^2-q^2=7\tag{1} $$ Equation $(1)$ has two sequences of solutions: $$ (p,q)\in\{\overbrace{\ \ \ (2,1)\ \ \ }^{n=-3},\overbrace{\ \ (8,11)\ \ }^{n=57},\overbrace{\ (46,65)\ }^{n=2109},\dots\}\tag{2} $$ and $$ (p,q)\in\{\overbrace{\ \ \ (4,5)\ \ \ }^{n=9},\overbrace{\ (22,31)\ }^{n=477},\overbrace{(128,181)}^{n=16377},\dots\}\tag{3} $$ Both of which satisfy the recursion $$ (p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\tag{4} $$ We can solve the recursion $(4)$ for $(2)$ with $$ p_k=\tfrac14\left(4+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(4-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{5} $$ and $$ q_k=\tfrac12\left(1+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac12\left(1-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{6} $$ Additionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p> <p>Therefore, we get the solutions for $n$ as $$ \begin{align} n_k &amp;=p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac18\left[\left(9+4\sqrt2\right)\left(3+2\sqrt2\right)^{2k}+\left(9-4\sqrt2\right)\left(3-2\sqrt2\right)^{2k}-42\right]}\tag{7} \end{align} $$ where $k\in\mathbb{Z}$. For example, if $k\in\{-5,-4,-3,-2,-1,0,1,2,3,4,5\}$, $$ n_k\in\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\} $$</p> <hr> <p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. 
As above, this leads to the Pell's Equation $$ 2p^2-q^2=1\tag{8} $$ Equation $(8)$ has the solutions $$ (p,q)\in\{\overbrace{\ \ \ (1,1)\ \ \ }^{n=0},\overbrace{\ \ \ (5,7)\ \ \ }^{n=168},\overbrace{\ (29,41)\ }^{n=5880},\dots\}\tag{9} $$ which satisfies $(4)$.</p> <p>We can solve the recursion $(4)$ for $(9)$ with $$ p_k=\tfrac14\left(2+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{10} $$ and $$ q_k=\tfrac14\left(2+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{11} $$ Formulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\ge0$. $$ \begin{align} n_k &amp;=7p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac78\left[\left(3+2\sqrt2\right)^{2k+1}+\left(3-2\sqrt2\right)^{2k+1}-6\right]}\tag{12} \end{align} $$ where $k\in\mathbb{Z}^{\ge0}$. For exmple, if $k\in\{0,1,2,3,4,5\}$, $$ n_k\in\{0,168,5880,199920,6791568,230713560\} $$</p> <hr> <p><strong>Complete Closed and Recursive Solutions</strong></p> <p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\ge1$, we get that the closed form for the complete list of solutions is $$ \hspace{-8mm}\bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix} =\frac18\left(\,\begin{bmatrix}9-4\sqrt2\\9+4\sqrt2\\21+14\sqrt2\end{bmatrix}\left(3+2\sqrt2\right)^{2k}+\begin{bmatrix}9+4\sqrt2\\9-4\sqrt2\\21-14\sqrt2\end{bmatrix}\left(3-2\sqrt2\right)^{2k}-\begin{bmatrix}42\\42\\42\end{bmatrix}\,\right)}\tag{13} $$ and a recursive list of solutions is given by<br> $\scriptsize\begin{bmatrix}n_1\\n_2\\n_3\end{bmatrix}=\begin{bmatrix}9\\57\\168\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_4\\n_5\\n_6\end{bmatrix}=\begin{bmatrix}477\\2109\\5880\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_7\\n_8\\n_9\end{bmatrix}=\begin{bmatrix}16377\\71817\\199920\end{bmatrix},\quad$ and $$ \bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k+1}\\n_{3k+2}\\n_{3k+3}\end{bmatrix}=35\begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix}-35\begin{bmatrix}n_{3k-5}\\n_{3k-4}\\n_{3k-3}\end{bmatrix}+\begin{bmatrix}n_{3k-8}\\n_{3k-7}\\n_{3k-6}\end{bmatrix} }\tag{14} $$</p> Answer 12: <p>There are more solutions (n&lt;1000000): $$ \begin{array}{c} 9 \\ 57 \\ 168 \\ 477 \\ 2109 \\ 5880 \\ 16377 \\ 71817 \\ 199920 \\ 556509 \\ \end{array} $$ so you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p> <p>Added after observation of ross-millikan:</p> <p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. Thus we have two possibilities</p> <p>a) $n+7=a^2$ and $2n+7=b^2$</p> <p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p> <p>for b) n should be divisible by $7$</p> Answer 13: <p>$$\gcd(n+7,2n+7)=\gcd(n+7,n)=\gcd(7,n)\in\{1,7\}$$</p> <p>If the $\gcd=1$, then $n+7,2n+7$ are both squares:</p> <p>$$n+7=a^2,2n+7=b^2\implies 2a^2-b^2=7$$</p> <p>If the $\gcd=7$, then $\frac{n+7}{7},\frac{2n+7}{7}$ are both integer squares. 
Write $n=7m$:</p> <p>$$m+1=a^2,2m+1=b^2\implies2a^2-b^2=1$$</p> <p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p> <ul> <li><p>The first equation has solutions starting with $\{(2,1),(8,11),\cdots\}$</p></li> <li><p>The second equation has solutions starting with $\{(1,1),(5,7),(41,29),\cdots\}$.</p></li> </ul> <p>These can be generated by looking at $(3+\sqrt2)(1+\sqrt2)^{2k},(1+\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\frac{1}{1+\sqrt2}=-(1-\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\ge0$.</p> <p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p> <p><strong>Clarification:</strong> When I say "these can be generated by looking at $(a+b\sqrt2)^k$", I refer to the fact that by writing $(a+b\sqrt2)^k=A_k+B_k\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\sqrt2\to a^2-2b^2$ is multiplicative.</p> Answer 14: <p>Because $2(n+7)-(2n+7)=7$, we know that $\gcd(n+7,2n+7)\mid7$.</p> <p>Since we only want positive $n$, this implies one of</p> <ol> <li><p>$n+7$ and $2n+7$ are squares, or</p></li> <li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p> <hr></li> </ol> <p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation $$ 2p^2-q^2=7\tag{1} $$ Equation $(1)$ has two sequences of solutions: $$ (p,q)\in\{\overbrace{\ \ \ (2,1)\ \ \ }^{n=-3},\overbrace{\ \ (8,11)\ \ }^{n=57},\overbrace{\ (46,65)\ }^{n=2109},\dots\}\tag{2} $$ and $$ (p,q)\in\{\overbrace{\ \ \ (4,5)\ \ \ }^{n=9},\overbrace{\ (22,31)\ }^{n=477},\overbrace{(128,181)}^{n=16377},\dots\}\tag{3} $$ Both of which satisfy the recursion $$ (p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\tag{4} $$ We can solve the recursion $(4)$ for $(2)$ with $$ p_k=\tfrac14\left(4+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(4-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{5} $$ and $$ q_k=\tfrac12\left(1+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac12\left(1-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{6} $$ Additionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p> <p>Therefore, we get the solutions for $n$ as $$ \begin{align} n_k &amp;=p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac18\left[\left(9+4\sqrt2\right)\left(3+2\sqrt2\right)^{2k}+\left(9-4\sqrt2\right)\left(3-2\sqrt2\right)^{2k}-42\right]}\tag{7} \end{align} $$ where $k\in\mathbb{Z}$. For example, if $k\in\{-5,-4,-3,-2,-1,0,1,2,3,4,5\}$, $$ n_k\in\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\} $$</p> <hr> <p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. 
As above, this leads to the Pell's Equation $$ 2p^2-q^2=1\tag{8} $$ Equation $(8)$ has the solutions $$ (p,q)\in\{\overbrace{\ \ \ (1,1)\ \ \ }^{n=0},\overbrace{\ \ \ (5,7)\ \ \ }^{n=168},\overbrace{\ (29,41)\ }^{n=5880},\dots\}\tag{9} $$ which satisfies $(4)$.</p> <p>We can solve the recursion $(4)$ for $(9)$ with $$ p_k=\tfrac14\left(2+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{10} $$ and $$ q_k=\tfrac14\left(2+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{11} $$ Formulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\ge0$. $$ \begin{align} n_k &amp;=7p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac78\left[\left(3+2\sqrt2\right)^{2k+1}+\left(3-2\sqrt2\right)^{2k+1}-6\right]}\tag{12} \end{align} $$ where $k\in\mathbb{Z}^{\ge0}$. For exmple, if $k\in\{0,1,2,3,4,5\}$, $$ n_k\in\{0,168,5880,199920,6791568,230713560\} $$</p> <hr> <p><strong>Complete Closed and Recursive Solutions</strong></p> <p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\ge1$, we get that the closed form for the complete list of solutions is $$ \hspace{-8mm}\bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix} =\frac18\left(\,\begin{bmatrix}9-4\sqrt2\\9+4\sqrt2\\21+14\sqrt2\end{bmatrix}\left(3+2\sqrt2\right)^{2k}+\begin{bmatrix}9+4\sqrt2\\9-4\sqrt2\\21-14\sqrt2\end{bmatrix}\left(3-2\sqrt2\right)^{2k}-\begin{bmatrix}42\\42\\42\end{bmatrix}\,\right)}\tag{13} $$ and a recursive list of solutions is given by<br> $\scriptsize\begin{bmatrix}n_1\\n_2\\n_3\end{bmatrix}=\begin{bmatrix}9\\57\\168\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_4\\n_5\\n_6\end{bmatrix}=\begin{bmatrix}477\\2109\\5880\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_7\\n_8\\n_9\end{bmatrix}=\begin{bmatrix}16377\\71817\\199920\end{bmatrix},\quad$ and $$ \bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k+1}\\n_{3k+2}\\n_{3k+3}\end{bmatrix}=35\begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix}-35\begin{bmatrix}n_{3k-5}\\n_{3k-4}\\n_{3k-3}\end{bmatrix}+\begin{bmatrix}n_{3k-8}\\n_{3k-7}\\n_{3k-6}\end{bmatrix} }\tag{14} $$</p> Answer 15: <p>There are more solutions (n&lt;1000000): $$ \begin{array}{c} 9 \\ 57 \\ 168 \\ 477 \\ 2109 \\ 5880 \\ 16377 \\ 71817 \\ 199920 \\ 556509 \\ \end{array} $$ so you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p> <p>Added after observation of ross-millikan:</p> <p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. Thus we have two possibilities</p> <p>a) $n+7=a^2$ and $2n+7=b^2$</p> <p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p> <p>for b) n should be divisible by $7$</p> Answer 16: <p>$$\gcd(n+7,2n+7)=\gcd(n+7,n)=\gcd(7,n)\in\{1,7\}$$</p> <p>If the $\gcd=1$, then $n+7,2n+7$ are both squares:</p> <p>$$n+7=a^2,2n+7=b^2\implies 2a^2-b^2=7$$</p> <p>If the $\gcd=7$, then $\frac{n+7}{7},\frac{2n+7}{7}$ are both integer squares. 
Write $n=7m$:</p> <p>$$m+1=a^2,2m+1=b^2\implies2a^2-b^2=1$$</p> <p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p> <ul> <li><p>The first equation has solutions starting with $\{(2,1),(8,11),\cdots\}$</p></li> <li><p>The second equation has solutions starting with $\{(1,1),(5,7),(41,29),\cdots\}$.</p></li> </ul> <p>These can be generated by looking at $(3+\sqrt2)(1+\sqrt2)^{2k},(1+\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\frac{1}{1+\sqrt2}=-(1-\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\ge0$.</p> <p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p> <p><strong>Clarification:</strong> When I say "these can be generated by looking at $(a+b\sqrt2)^k$", I refer to the fact that by writing $(a+b\sqrt2)^k=A_k+B_k\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\sqrt2\to a^2-2b^2$ is multiplicative.</p> Answer 17: <p>Because $2(n+7)-(2n+7)=7$, we know that $\gcd(n+7,2n+7)\mid7$.</p> <p>Since we only want positive $n$, this implies one of</p> <ol> <li><p>$n+7$ and $2n+7$ are squares, or</p></li> <li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p> <hr></li> </ol> <p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation $$ 2p^2-q^2=7\tag{1} $$ Equation $(1)$ has two sequences of solutions: $$ (p,q)\in\{\overbrace{\ \ \ (2,1)\ \ \ }^{n=-3},\overbrace{\ \ (8,11)\ \ }^{n=57},\overbrace{\ (46,65)\ }^{n=2109},\dots\}\tag{2} $$ and $$ (p,q)\in\{\overbrace{\ \ \ (4,5)\ \ \ }^{n=9},\overbrace{\ (22,31)\ }^{n=477},\overbrace{(128,181)}^{n=16377},\dots\}\tag{3} $$ Both of which satisfy the recursion $$ (p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\tag{4} $$ We can solve the recursion $(4)$ for $(2)$ with $$ p_k=\tfrac14\left(4+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(4-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{5} $$ and $$ q_k=\tfrac12\left(1+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac12\left(1-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{6} $$ Additionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p> <p>Therefore, we get the solutions for $n$ as $$ \begin{align} n_k &amp;=p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac18\left[\left(9+4\sqrt2\right)\left(3+2\sqrt2\right)^{2k}+\left(9-4\sqrt2\right)\left(3-2\sqrt2\right)^{2k}-42\right]}\tag{7} \end{align} $$ where $k\in\mathbb{Z}$. For example, if $k\in\{-5,-4,-3,-2,-1,0,1,2,3,4,5\}$, $$ n_k\in\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\} $$</p> <hr> <p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. 
As above, this leads to the Pell's Equation $$ 2p^2-q^2=1\tag{8} $$ Equation $(8)$ has the solutions $$ (p,q)\in\{\overbrace{\ \ \ (1,1)\ \ \ }^{n=0},\overbrace{\ \ \ (5,7)\ \ \ }^{n=168},\overbrace{\ (29,41)\ }^{n=5880},\dots\}\tag{9} $$ which satisfies $(4)$.</p> <p>We can solve the recursion $(4)$ for $(9)$ with $$ p_k=\tfrac14\left(2+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{10} $$ and $$ q_k=\tfrac14\left(2+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{11} $$ Formulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\ge0$. $$ \begin{align} n_k &amp;=7p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac78\left[\left(3+2\sqrt2\right)^{2k+1}+\left(3-2\sqrt2\right)^{2k+1}-6\right]}\tag{12} \end{align} $$ where $k\in\mathbb{Z}^{\ge0}$. For exmple, if $k\in\{0,1,2,3,4,5\}$, $$ n_k\in\{0,168,5880,199920,6791568,230713560\} $$</p> <hr> <p><strong>Complete Closed and Recursive Solutions</strong></p> <p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\ge1$, we get that the closed form for the complete list of solutions is $$ \hspace{-8mm}\bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix} =\frac18\left(\,\begin{bmatrix}9-4\sqrt2\\9+4\sqrt2\\21+14\sqrt2\end{bmatrix}\left(3+2\sqrt2\right)^{2k}+\begin{bmatrix}9+4\sqrt2\\9-4\sqrt2\\21-14\sqrt2\end{bmatrix}\left(3-2\sqrt2\right)^{2k}-\begin{bmatrix}42\\42\\42\end{bmatrix}\,\right)}\tag{13} $$ and a recursive list of solutions is given by<br> $\scriptsize\begin{bmatrix}n_1\\n_2\\n_3\end{bmatrix}=\begin{bmatrix}9\\57\\168\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_4\\n_5\\n_6\end{bmatrix}=\begin{bmatrix}477\\2109\\5880\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_7\\n_8\\n_9\end{bmatrix}=\begin{bmatrix}16377\\71817\\199920\end{bmatrix},\quad$ and $$ \bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k+1}\\n_{3k+2}\\n_{3k+3}\end{bmatrix}=35\begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix}-35\begin{bmatrix}n_{3k-5}\\n_{3k-4}\\n_{3k-3}\end{bmatrix}+\begin{bmatrix}n_{3k-8}\\n_{3k-7}\\n_{3k-6}\end{bmatrix} }\tag{14} $$</p> Answer 18: <p>There are more solutions (n&lt;1000000): $$ \begin{array}{c} 9 \\ 57 \\ 168 \\ 477 \\ 2109 \\ 5880 \\ 16377 \\ 71817 \\ 199920 \\ 556509 \\ \end{array} $$ so you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p> <p>Added after observation of ross-millikan:</p> <p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. Thus we have two possibilities</p> <p>a) $n+7=a^2$ and $2n+7=b^2$</p> <p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p> <p>for b) n should be divisible by $7$</p> Answer 19: <p>$$\gcd(n+7,2n+7)=\gcd(n+7,n)=\gcd(7,n)\in\{1,7\}$$</p> <p>If the $\gcd=1$, then $n+7,2n+7$ are both squares:</p> <p>$$n+7=a^2,2n+7=b^2\implies 2a^2-b^2=7$$</p> <p>If the $\gcd=7$, then $\frac{n+7}{7},\frac{2n+7}{7}$ are both integer squares. 
Write $n=7m$:</p> <p>$$m+1=a^2,2m+1=b^2\implies2a^2-b^2=1$$</p> <p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p> <ul> <li><p>The first equation has solutions starting with $\{(2,1),(8,11),\cdots\}$</p></li> <li><p>The second equation has solutions starting with $\{(1,1),(5,7),(41,29),\cdots\}$.</p></li> </ul> <p>These can be generated by looking at $(3+\sqrt2)(1+\sqrt2)^{2k},(1+\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\frac{1}{1+\sqrt2}=-(1-\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\ge0$.</p> <p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p> <p><strong>Clarification:</strong> When I say "these can be generated by looking at $(a+b\sqrt2)^k$", I refer to the fact that by writing $(a+b\sqrt2)^k=A_k+B_k\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\sqrt2\to a^2-2b^2$ is multiplicative.</p> Answer 20: <p>Because $2(n+7)-(2n+7)=7$, we know that $\gcd(n+7,2n+7)\mid7$.</p> <p>Since we only want positive $n$, this implies one of</p> <ol> <li><p>$n+7$ and $2n+7$ are squares, or</p></li> <li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p> <hr></li> </ol> <p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation $$ 2p^2-q^2=7\tag{1} $$ Equation $(1)$ has two sequences of solutions: $$ (p,q)\in\{\overbrace{\ \ \ (2,1)\ \ \ }^{n=-3},\overbrace{\ \ (8,11)\ \ }^{n=57},\overbrace{\ (46,65)\ }^{n=2109},\dots\}\tag{2} $$ and $$ (p,q)\in\{\overbrace{\ \ \ (4,5)\ \ \ }^{n=9},\overbrace{\ (22,31)\ }^{n=477},\overbrace{(128,181)}^{n=16377},\dots\}\tag{3} $$ Both of which satisfy the recursion $$ (p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\tag{4} $$ We can solve the recursion $(4)$ for $(2)$ with $$ p_k=\tfrac14\left(4+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(4-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{5} $$ and $$ q_k=\tfrac12\left(1+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac12\left(1-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{6} $$ Additionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p> <p>Therefore, we get the solutions for $n$ as $$ \begin{align} n_k &amp;=p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac18\left[\left(9+4\sqrt2\right)\left(3+2\sqrt2\right)^{2k}+\left(9-4\sqrt2\right)\left(3-2\sqrt2\right)^{2k}-42\right]}\tag{7} \end{align} $$ where $k\in\mathbb{Z}$. For example, if $k\in\{-5,-4,-3,-2,-1,0,1,2,3,4,5\}$, $$ n_k\in\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\} $$</p> <hr> <p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. 
As above, this leads to the Pell's Equation $$ 2p^2-q^2=1\tag{8} $$ Equation $(8)$ has the solutions $$ (p,q)\in\{\overbrace{\ \ \ (1,1)\ \ \ }^{n=0},\overbrace{\ \ \ (5,7)\ \ \ }^{n=168},\overbrace{\ (29,41)\ }^{n=5880},\dots\}\tag{9} $$ which satisfies $(4)$.</p> <p>We can solve the recursion $(4)$ for $(9)$ with $$ p_k=\tfrac14\left(2+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{10} $$ and $$ q_k=\tfrac14\left(2+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{11} $$ Formulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\ge0$. $$ \begin{align} n_k &amp;=7p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac78\left[\left(3+2\sqrt2\right)^{2k+1}+\left(3-2\sqrt2\right)^{2k+1}-6\right]}\tag{12} \end{align} $$ where $k\in\mathbb{Z}^{\ge0}$. For exmple, if $k\in\{0,1,2,3,4,5\}$, $$ n_k\in\{0,168,5880,199920,6791568,230713560\} $$</p> <hr> <p><strong>Complete Closed and Recursive Solutions</strong></p> <p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\ge1$, we get that the closed form for the complete list of solutions is $$ \hspace{-8mm}\bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix} =\frac18\left(\,\begin{bmatrix}9-4\sqrt2\\9+4\sqrt2\\21+14\sqrt2\end{bmatrix}\left(3+2\sqrt2\right)^{2k}+\begin{bmatrix}9+4\sqrt2\\9-4\sqrt2\\21-14\sqrt2\end{bmatrix}\left(3-2\sqrt2\right)^{2k}-\begin{bmatrix}42\\42\\42\end{bmatrix}\,\right)}\tag{13} $$ and a recursive list of solutions is given by<br> $\scriptsize\begin{bmatrix}n_1\\n_2\\n_3\end{bmatrix}=\begin{bmatrix}9\\57\\168\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_4\\n_5\\n_6\end{bmatrix}=\begin{bmatrix}477\\2109\\5880\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_7\\n_8\\n_9\end{bmatrix}=\begin{bmatrix}16377\\71817\\199920\end{bmatrix},\quad$ and $$ \bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k+1}\\n_{3k+2}\\n_{3k+3}\end{bmatrix}=35\begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix}-35\begin{bmatrix}n_{3k-5}\\n_{3k-4}\\n_{3k-3}\end{bmatrix}+\begin{bmatrix}n_{3k-8}\\n_{3k-7}\\n_{3k-6}\end{bmatrix} }\tag{14} $$</p> Answer 21: <p>There are more solutions (n&lt;1000000): $$ \begin{array}{c} 9 \\ 57 \\ 168 \\ 477 \\ 2109 \\ 5880 \\ 16377 \\ 71817 \\ 199920 \\ 556509 \\ \end{array} $$ so you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p> <p>Added after observation of ross-millikan:</p> <p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. Thus we have two possibilities</p> <p>a) $n+7=a^2$ and $2n+7=b^2$</p> <p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p> <p>for b) n should be divisible by $7$</p> Answer 22: <p>$$\gcd(n+7,2n+7)=\gcd(n+7,n)=\gcd(7,n)\in\{1,7\}$$</p> <p>If the $\gcd=1$, then $n+7,2n+7$ are both squares:</p> <p>$$n+7=a^2,2n+7=b^2\implies 2a^2-b^2=7$$</p> <p>If the $\gcd=7$, then $\frac{n+7}{7},\frac{2n+7}{7}$ are both integer squares. 
Write $n=7m$:</p> <p>$$m+1=a^2,2m+1=b^2\implies2a^2-b^2=1$$</p> <p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p> <ul> <li><p>The first equation has solutions starting with $\{(2,1),(8,11),\cdots\}$</p></li> <li><p>The second equation has solutions starting with $\{(1,1),(5,7),(41,29),\cdots\}$.</p></li> </ul> <p>These can be generated by looking at $(3+\sqrt2)(1+\sqrt2)^{2k},(1+\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\frac{1}{1+\sqrt2}=-(1-\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\ge0$.</p> <p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p> <p><strong>Clarification:</strong> When I say "these can be generated by looking at $(a+b\sqrt2)^k$", I refer to the fact that by writing $(a+b\sqrt2)^k=A_k+B_k\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\sqrt2\to a^2-2b^2$ is multiplicative.</p> Answer 23: <p>Because $2(n+7)-(2n+7)=7$, we know that $\gcd(n+7,2n+7)\mid7$.</p> <p>Since we only want positive $n$, this implies one of</p> <ol> <li><p>$n+7$ and $2n+7$ are squares, or</p></li> <li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p> <hr></li> </ol> <p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation $$ 2p^2-q^2=7\tag{1} $$ Equation $(1)$ has two sequences of solutions: $$ (p,q)\in\{\overbrace{\ \ \ (2,1)\ \ \ }^{n=-3},\overbrace{\ \ (8,11)\ \ }^{n=57},\overbrace{\ (46,65)\ }^{n=2109},\dots\}\tag{2} $$ and $$ (p,q)\in\{\overbrace{\ \ \ (4,5)\ \ \ }^{n=9},\overbrace{\ (22,31)\ }^{n=477},\overbrace{(128,181)}^{n=16377},\dots\}\tag{3} $$ Both of which satisfy the recursion $$ (p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\tag{4} $$ We can solve the recursion $(4)$ for $(2)$ with $$ p_k=\tfrac14\left(4+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(4-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{5} $$ and $$ q_k=\tfrac12\left(1+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac12\left(1-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{6} $$ Additionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p> <p>Therefore, we get the solutions for $n$ as $$ \begin{align} n_k &amp;=p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac18\left[\left(9+4\sqrt2\right)\left(3+2\sqrt2\right)^{2k}+\left(9-4\sqrt2\right)\left(3-2\sqrt2\right)^{2k}-42\right]}\tag{7} \end{align} $$ where $k\in\mathbb{Z}$. For example, if $k\in\{-5,-4,-3,-2,-1,0,1,2,3,4,5\}$, $$ n_k\in\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\} $$</p> <hr> <p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. 
As above, this leads to the Pell's Equation $$ 2p^2-q^2=1\tag{8} $$ Equation $(8)$ has the solutions $$ (p,q)\in\{\overbrace{\ \ \ (1,1)\ \ \ }^{n=0},\overbrace{\ \ \ (5,7)\ \ \ }^{n=168},\overbrace{\ (29,41)\ }^{n=5880},\dots\}\tag{9} $$ which satisfies $(4)$.</p> <p>We can solve the recursion $(4)$ for $(9)$ with $$ p_k=\tfrac14\left(2+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{10} $$ and $$ q_k=\tfrac14\left(2+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{11} $$ Formulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\ge0$. $$ \begin{align} n_k &amp;=7p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac78\left[\left(3+2\sqrt2\right)^{2k+1}+\left(3-2\sqrt2\right)^{2k+1}-6\right]}\tag{12} \end{align} $$ where $k\in\mathbb{Z}^{\ge0}$. For exmple, if $k\in\{0,1,2,3,4,5\}$, $$ n_k\in\{0,168,5880,199920,6791568,230713560\} $$</p> <hr> <p><strong>Complete Closed and Recursive Solutions</strong></p> <p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\ge1$, we get that the closed form for the complete list of solutions is $$ \hspace{-8mm}\bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix} =\frac18\left(\,\begin{bmatrix}9-4\sqrt2\\9+4\sqrt2\\21+14\sqrt2\end{bmatrix}\left(3+2\sqrt2\right)^{2k}+\begin{bmatrix}9+4\sqrt2\\9-4\sqrt2\\21-14\sqrt2\end{bmatrix}\left(3-2\sqrt2\right)^{2k}-\begin{bmatrix}42\\42\\42\end{bmatrix}\,\right)}\tag{13} $$ and a recursive list of solutions is given by<br> $\scriptsize\begin{bmatrix}n_1\\n_2\\n_3\end{bmatrix}=\begin{bmatrix}9\\57\\168\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_4\\n_5\\n_6\end{bmatrix}=\begin{bmatrix}477\\2109\\5880\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_7\\n_8\\n_9\end{bmatrix}=\begin{bmatrix}16377\\71817\\199920\end{bmatrix},\quad$ and $$ \bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k+1}\\n_{3k+2}\\n_{3k+3}\end{bmatrix}=35\begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix}-35\begin{bmatrix}n_{3k-5}\\n_{3k-4}\\n_{3k-3}\end{bmatrix}+\begin{bmatrix}n_{3k-8}\\n_{3k-7}\\n_{3k-6}\end{bmatrix} }\tag{14} $$</p> Answer 24: <p>There are more solutions (n&lt;1000000): $$ \begin{array}{c} 9 \\ 57 \\ 168 \\ 477 \\ 2109 \\ 5880 \\ 16377 \\ 71817 \\ 199920 \\ 556509 \\ \end{array} $$ so you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p> <p>Added after observation of ross-millikan:</p> <p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. Thus we have two possibilities</p> <p>a) $n+7=a^2$ and $2n+7=b^2$</p> <p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p> <p>for b) n should be divisible by $7$</p> Answer 25: <p>$$\gcd(n+7,2n+7)=\gcd(n+7,n)=\gcd(7,n)\in\{1,7\}$$</p> <p>If the $\gcd=1$, then $n+7,2n+7$ are both squares:</p> <p>$$n+7=a^2,2n+7=b^2\implies 2a^2-b^2=7$$</p> <p>If the $\gcd=7$, then $\frac{n+7}{7},\frac{2n+7}{7}$ are both integer squares. 
Write $n=7m$:</p> <p>$$m+1=a^2,2m+1=b^2\implies2a^2-b^2=1$$</p> <p>Finding exact solutions from here requires study of Pell's equation, as indicated in the comments.</p> <ul> <li><p>The first equation has solutions starting with $\{(2,1),(8,11),\cdots\}$</p></li> <li><p>The second equation has solutions starting with $\{(1,1),(5,7),(41,29),\cdots\}$.</p></li> </ul> <p>These can be generated by looking at $(3+\sqrt2)(1+\sqrt2)^{2k},(1+\sqrt2)^{2k+1}$ respectively for integer $k$. Noting that $\frac{1}{1+\sqrt2}=-(1-\sqrt2)$ tells us that for the second of these sequences, we cann afford to simply look at $k\ge0$.</p> <p>In the first case, given $2a^2-b^2=7$, note that taking $n=b^2-a^2$ gives $a^2=n+7,b^2=2n+7$ as desired, i.e. the fraction in question is a square. A slight adaptation of this resolves the second case.</p> <p><strong>Clarification:</strong> When I say "these can be generated by looking at $(a+b\sqrt2)^k$", I refer to the fact that by writing $(a+b\sqrt2)^k=A_k+B_k\sqrt2$ for integers $A_k,B_k$, we will have $2B_k^2-A_k^2=1$ or $7$, depending on this sequence. This method makes use of techniques from algebraic number theory, where we learn that the function $N:a+b\sqrt2\to a^2-2b^2$ is multiplicative.</p> Answer 26: <p>Because $2(n+7)-(2n+7)=7$, we know that $\gcd(n+7,2n+7)\mid7$.</p> <p>Since we only want positive $n$, this implies one of</p> <ol> <li><p>$n+7$ and $2n+7$ are squares, or</p></li> <li><p>$n+7$ and $2n+7$ are each $7$ times a square.</p> <hr></li> </ol> <p><strong>Case 1:</strong> Let $p^2=n+7$. Then we need $q^2=2n-7=2p^2-7$. This gives the Pell equation $$ 2p^2-q^2=7\tag{1} $$ Equation $(1)$ has two sequences of solutions: $$ (p,q)\in\{\overbrace{\ \ \ (2,1)\ \ \ }^{n=-3},\overbrace{\ \ (8,11)\ \ }^{n=57},\overbrace{\ (46,65)\ }^{n=2109},\dots\}\tag{2} $$ and $$ (p,q)\in\{\overbrace{\ \ \ (4,5)\ \ \ }^{n=9},\overbrace{\ (22,31)\ }^{n=477},\overbrace{(128,181)}^{n=16377},\dots\}\tag{3} $$ Both of which satisfy the recursion $$ (p_k,q_k)=6(p_{k-1},q_{k-1})-(p_{k-2},q_{k-2})\tag{4} $$ We can solve the recursion $(4)$ for $(2)$ with $$ p_k=\tfrac14\left(4+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(4-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{5} $$ and $$ q_k=\tfrac12\left(1+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac12\left(1-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{6} $$ Additionally, $(5)$ and $(6)$ with negative $k$ gives $(3)$ with negative $q$.</p> <p>Therefore, we get the solutions for $n$ as $$ \begin{align} n_k &amp;=p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac18\left[\left(9+4\sqrt2\right)\left(3+2\sqrt2\right)^{2k}+\left(9-4\sqrt2\right)\left(3-2\sqrt2\right)^{2k}-42\right]}\tag{7} \end{align} $$ where $k\in\mathbb{Z}$. For example, if $k\in\{-5,-4,-3,-2,-1,0,1,2,3,4,5\}$, $$ n_k\in\{18905097,556509,16377,477,9,-3,57,2109,71817,2439837,82882809\} $$</p> <hr> <p><strong>Case 2:</strong> Let $7p^2=n+7$. Then we need $7q^2=2n+7=14p^2-7$. 
As above, this leads to the Pell's Equation $$ 2p^2-q^2=1\tag{8} $$ Equation $(8)$ has the solutions $$ (p,q)\in\{\overbrace{\ \ \ (1,1)\ \ \ }^{n=0},\overbrace{\ \ \ (5,7)\ \ \ }^{n=168},\overbrace{\ (29,41)\ }^{n=5880},\dots\}\tag{9} $$ which satisfies $(4)$.</p> <p>We can solve the recursion $(4)$ for $(9)$ with $$ p_k=\tfrac14\left(2+\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{10} $$ and $$ q_k=\tfrac14\left(2+2\sqrt2\right)\left(3+2\sqrt2\right)^k+\tfrac14\left(2-2\sqrt2\right)\left(3-2\sqrt2\right)^k\tag{11} $$ Formulas $(10)$ and $(11)$ with negaitve $k$ just give $(9)$ with negative $q$. Thus, we only need consider $k\ge0$. $$ \begin{align} n_k &amp;=7p_k^2-7\\ &amp;=\bbox[5px,border:2px solid #C0A000]{\frac78\left[\left(3+2\sqrt2\right)^{2k+1}+\left(3-2\sqrt2\right)^{2k+1}-6\right]}\tag{12} \end{align} $$ where $k\in\mathbb{Z}^{\ge0}$. For exmple, if $k\in\{0,1,2,3,4,5\}$, $$ n_k\in\{0,168,5880,199920,6791568,230713560\} $$</p> <hr> <p><strong>Complete Closed and Recursive Solutions</strong></p> <p>Collecting the results in $(7)$ and $(12)$, and reindexing for monotonicity with $k\ge1$, we get that the closed form for the complete list of solutions is $$ \hspace{-8mm}\bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix} =\frac18\left(\,\begin{bmatrix}9-4\sqrt2\\9+4\sqrt2\\21+14\sqrt2\end{bmatrix}\left(3+2\sqrt2\right)^{2k}+\begin{bmatrix}9+4\sqrt2\\9-4\sqrt2\\21-14\sqrt2\end{bmatrix}\left(3-2\sqrt2\right)^{2k}-\begin{bmatrix}42\\42\\42\end{bmatrix}\,\right)}\tag{13} $$ and a recursive list of solutions is given by<br> $\scriptsize\begin{bmatrix}n_1\\n_2\\n_3\end{bmatrix}=\begin{bmatrix}9\\57\\168\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_4\\n_5\\n_6\end{bmatrix}=\begin{bmatrix}477\\2109\\5880\end{bmatrix},\quad\scriptsize\begin{bmatrix}n_7\\n_8\\n_9\end{bmatrix}=\begin{bmatrix}16377\\71817\\199920\end{bmatrix},\quad$ and $$ \bbox[5px,border:2px solid #C0A000]{ \begin{bmatrix}n_{3k+1}\\n_{3k+2}\\n_{3k+3}\end{bmatrix}=35\begin{bmatrix}n_{3k-2}\\n_{3k-1}\\n_{3k}\end{bmatrix}-35\begin{bmatrix}n_{3k-5}\\n_{3k-4}\\n_{3k-3}\end{bmatrix}+\begin{bmatrix}n_{3k-8}\\n_{3k-7}\\n_{3k-6}\end{bmatrix} }\tag{14} $$</p> Answer 27: <p>There are more solutions (n&lt;1000000): $$ \begin{array}{c} 9 \\ 57 \\ 168 \\ 477 \\ 2109 \\ 5880 \\ 16377 \\ 71817 \\ 199920 \\ 556509 \\ \end{array} $$ so you just plug these values of $n$ and check otherwise the question does not make sense. Also One solution in your list is missing for n&lt;500.</p> <p>Added after observation of ross-millikan:</p> <p>$n+7$ and $2n+7$ can have common multiplier, is so than $2*(n+7)-(2n+7)$ is also devisible by the same number which is clearly $7$, so the theorem is that if numerator and denominator have a common multiplier it can only be $7$. Thus we have two possibilities</p> <p>a) $n+7=a^2$ and $2n+7=b^2$</p> <p>b) $n+7=7 a^2$ and $2n+7=7 b^2$</p> <p>for b) n should be divisible by $7$</p> Answer 28: <p>$$\gcd(n+7,2n+7)=\gcd(n+7,n)=\gcd(7,n)\in\{1,7\}$$</p> <p>If the $\gcd=1$, then $n+7,2n+7$ are both squares:</p> <p>$$n+7=a^2,2n+7=b^2\implies 2a^2-b^2=7$$</p> <p>If the $\gcd=7$, then $\frac{n+7}{7},\frac{2n+7}{7}$ are both integer squares. 
Books or authors who write about how to write mathematics properly
2,083,738
6
https://math.stackexchange.com/questions/2083738/books-or-authors-who-write-about-how-to-write-mathematics-properly
<p>I've always been interested in how to write maths properly. Depending on the knowledge of the reader, you can't spell out every detail of a proof of a theorem. You have to show the most important facts; otherwise your proof turns out tedious and long-winded. I'm writing a little paper about this for my students and I would like to know some authors who discuss this, as I need some references.</p>
[ { "answer_id": 2085051, "score": 2, "text": "<p>Jean Pierre Serre is in some sense a reference on good article writing. I would watch the awesome lecture of him \"<a href=\"https://www.youtube.com/watch?v=ECQyFzzBHlo\" rel=\"nofollow noreferrer\">How to write mathematics badly</a>\". Gives very good insights that you could pass to your students.</p>\n\n<p>I would also suggest <a href=\"https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false\" rel=\"nofollow noreferrer\">Mathematical Writing</a>, of Donald Knuth</p>\n" }, { "answer_id": 2085052, "score": 2, "text": "<p>See these books:</p>\n\n<ul>\n<li><p><a href=\"http://bookstore.ams.org/pmw\" rel=\"nofollow noreferrer\">A Primer of Mathematical Writing</a>\nby Steven G. Kranz.</p></li>\n<li><p><a href=\"http://epubs.siam.org/doi/book/10.1137/1.9780898719550\" rel=\"nofollow noreferrer\">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Hingham.</p></li>\n</ul>\n" }, { "answer_id": 2085051, "score": 2, "text": "<p>Jean Pierre Serre is in some sense a reference on good article writing. I would watch the awesome lecture of him \"<a href=\"https://www.youtube.com/watch?v=ECQyFzzBHlo\" rel=\"nofollow noreferrer\">How to write mathematics badly</a>\". Gives very good insights that you could pass to your students.</p>\n\n<p>I would also suggest <a href=\"https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false\" rel=\"nofollow noreferrer\">Mathematical Writing</a>, of Donald Knuth</p>\n" }, { "answer_id": 2085052, "score": 2, "text": "<p>See these books:</p>\n\n<ul>\n<li><p><a href=\"http://bookstore.ams.org/pmw\" rel=\"nofollow noreferrer\">A Primer of Mathematical Writing</a>\nby Steven G. Kranz.</p></li>\n<li><p><a href=\"http://epubs.siam.org/doi/book/10.1137/1.9780898719550\" rel=\"nofollow noreferrer\">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Hingham.</p></li>\n</ul>\n" }, { "answer_id": 2085051, "score": 2, "text": "<p>Jean Pierre Serre is in some sense a reference on good article writing. I would watch the awesome lecture of him \"<a href=\"https://www.youtube.com/watch?v=ECQyFzzBHlo\" rel=\"nofollow noreferrer\">How to write mathematics badly</a>\". Gives very good insights that you could pass to your students.</p>\n\n<p>I would also suggest <a href=\"https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false\" rel=\"nofollow noreferrer\">Mathematical Writing</a>, of Donald Knuth</p>\n" }, { "answer_id": 2085052, "score": 2, "text": "<p>See these books:</p>\n\n<ul>\n<li><p><a href=\"http://bookstore.ams.org/pmw\" rel=\"nofollow noreferrer\">A Primer of Mathematical Writing</a>\nby Steven G. Kranz.</p></li>\n<li><p><a href=\"http://epubs.siam.org/doi/book/10.1137/1.9780898719550\" rel=\"nofollow noreferrer\">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Hingham.</p></li>\n</ul>\n" }, { "answer_id": 2085051, "score": 2, "text": "<p>Jean Pierre Serre is in some sense a reference on good article writing. I would watch the awesome lecture of him \"<a href=\"https://www.youtube.com/watch?v=ECQyFzzBHlo\" rel=\"nofollow noreferrer\">How to write mathematics badly</a>\". 
Gives very good insights that you could pass to your students.</p>\n\n<p>I would also suggest <a href=\"https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false\" rel=\"nofollow noreferrer\">Mathematical Writing</a>, of Donald Knuth</p>\n" }, { "answer_id": 2085052, "score": 2, "text": "<p>See these books:</p>\n\n<ul>\n<li><p><a href=\"http://bookstore.ams.org/pmw\" rel=\"nofollow noreferrer\">A Primer of Mathematical Writing</a>\nby Steven G. Kranz.</p></li>\n<li><p><a href=\"http://epubs.siam.org/doi/book/10.1137/1.9780898719550\" rel=\"nofollow noreferrer\">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Hingham.</p></li>\n</ul>\n" }, { "answer_id": 2085051, "score": 2, "text": "<p>Jean Pierre Serre is in some sense a reference on good article writing. I would watch the awesome lecture of him \"<a href=\"https://www.youtube.com/watch?v=ECQyFzzBHlo\" rel=\"nofollow noreferrer\">How to write mathematics badly</a>\". Gives very good insights that you could pass to your students.</p>\n\n<p>I would also suggest <a href=\"https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false\" rel=\"nofollow noreferrer\">Mathematical Writing</a>, of Donald Knuth</p>\n" }, { "answer_id": 2085052, "score": 2, "text": "<p>See these books:</p>\n\n<ul>\n<li><p><a href=\"http://bookstore.ams.org/pmw\" rel=\"nofollow noreferrer\">A Primer of Mathematical Writing</a>\nby Steven G. Kranz.</p></li>\n<li><p><a href=\"http://epubs.siam.org/doi/book/10.1137/1.9780898719550\" rel=\"nofollow noreferrer\">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Hingham.</p></li>\n</ul>\n" }, { "answer_id": 2085051, "score": 2, "text": "<p>Jean Pierre Serre is in some sense a reference on good article writing. I would watch the awesome lecture of him \"<a href=\"https://www.youtube.com/watch?v=ECQyFzzBHlo\" rel=\"nofollow noreferrer\">How to write mathematics badly</a>\". Gives very good insights that you could pass to your students.</p>\n\n<p>I would also suggest <a href=\"https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false\" rel=\"nofollow noreferrer\">Mathematical Writing</a>, of Donald Knuth</p>\n" }, { "answer_id": 2085052, "score": 2, "text": "<p>See these books:</p>\n\n<ul>\n<li><p><a href=\"http://bookstore.ams.org/pmw\" rel=\"nofollow noreferrer\">A Primer of Mathematical Writing</a>\nby Steven G. Kranz.</p></li>\n<li><p><a href=\"http://epubs.siam.org/doi/book/10.1137/1.9780898719550\" rel=\"nofollow noreferrer\">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Hingham.</p></li>\n</ul>\n" }, { "answer_id": 2085051, "score": 2, "text": "<p>Jean Pierre Serre is in some sense a reference on good article writing. I would watch the awesome lecture of him \"<a href=\"https://www.youtube.com/watch?v=ECQyFzzBHlo\" rel=\"nofollow noreferrer\">How to write mathematics badly</a>\". 
Gives very good insights that you could pass to your students.</p>\n\n<p>I would also suggest <a href=\"https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false\" rel=\"nofollow noreferrer\">Mathematical Writing</a>, of Donald Knuth</p>\n" }, { "answer_id": 2085052, "score": 2, "text": "<p>See these books:</p>\n\n<ul>\n<li><p><a href=\"http://bookstore.ams.org/pmw\" rel=\"nofollow noreferrer\">A Primer of Mathematical Writing</a>\nby Steven G. Kranz.</p></li>\n<li><p><a href=\"http://epubs.siam.org/doi/book/10.1137/1.9780898719550\" rel=\"nofollow noreferrer\">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Hingham.</p></li>\n</ul>\n" }, { "answer_id": 2085051, "score": 2, "text": "<p>Jean Pierre Serre is in some sense a reference on good article writing. I would watch the awesome lecture of him \"<a href=\"https://www.youtube.com/watch?v=ECQyFzzBHlo\" rel=\"nofollow noreferrer\">How to write mathematics badly</a>\". Gives very good insights that you could pass to your students.</p>\n\n<p>I would also suggest <a href=\"https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false\" rel=\"nofollow noreferrer\">Mathematical Writing</a>, of Donald Knuth</p>\n" }, { "answer_id": 2085052, "score": 2, "text": "<p>See these books:</p>\n\n<ul>\n<li><p><a href=\"http://bookstore.ams.org/pmw\" rel=\"nofollow noreferrer\">A Primer of Mathematical Writing</a>\nby Steven G. Kranz.</p></li>\n<li><p><a href=\"http://epubs.siam.org/doi/book/10.1137/1.9780898719550\" rel=\"nofollow noreferrer\">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Hingham.</p></li>\n</ul>\n" }, { "answer_id": 2085051, "score": 2, "text": "<p>Jean Pierre Serre is in some sense a reference on good article writing. I would watch the awesome lecture of him \"<a href=\"https://www.youtube.com/watch?v=ECQyFzzBHlo\" rel=\"nofollow noreferrer\">How to write mathematics badly</a>\". Gives very good insights that you could pass to your students.</p>\n\n<p>I would also suggest <a href=\"https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false\" rel=\"nofollow noreferrer\">Mathematical Writing</a>, of Donald Knuth</p>\n" }, { "answer_id": 2085052, "score": 2, "text": "<p>See these books:</p>\n\n<ul>\n<li><p><a href=\"http://bookstore.ams.org/pmw\" rel=\"nofollow noreferrer\">A Primer of Mathematical Writing</a>\nby Steven G. Kranz.</p></li>\n<li><p><a href=\"http://epubs.siam.org/doi/book/10.1137/1.9780898719550\" rel=\"nofollow noreferrer\">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Hingham.</p></li>\n</ul>\n" }, { "answer_id": 2085051, "score": 2, "text": "<p>Jean Pierre Serre is in some sense a reference on good article writing. I would watch the awesome lecture of him \"<a href=\"https://www.youtube.com/watch?v=ECQyFzzBHlo\" rel=\"nofollow noreferrer\">How to write mathematics badly</a>\". 
Gives very good insights that you could pass to your students.</p>\n\n<p>I would also suggest <a href=\"https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false\" rel=\"nofollow noreferrer\">Mathematical Writing</a>, of Donald Knuth</p>\n" }, { "answer_id": 2085052, "score": 2, "text": "<p>See these books:</p>\n\n<ul>\n<li><p><a href=\"http://bookstore.ams.org/pmw\" rel=\"nofollow noreferrer\">A Primer of Mathematical Writing</a>\nby Steven G. Kranz.</p></li>\n<li><p><a href=\"http://epubs.siam.org/doi/book/10.1137/1.9780898719550\" rel=\"nofollow noreferrer\">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Hingham.</p></li>\n</ul>\n" }, { "answer_id": 2085051, "score": 2, "text": "<p>Jean Pierre Serre is in some sense a reference on good article writing. I would watch the awesome lecture of him \"<a href=\"https://www.youtube.com/watch?v=ECQyFzzBHlo\" rel=\"nofollow noreferrer\">How to write mathematics badly</a>\". Gives very good insights that you could pass to your students.</p>\n\n<p>I would also suggest <a href=\"https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false\" rel=\"nofollow noreferrer\">Mathematical Writing</a>, of Donald Knuth</p>\n" }, { "answer_id": 2085052, "score": 2, "text": "<p>See these books:</p>\n\n<ul>\n<li><p><a href=\"http://bookstore.ams.org/pmw\" rel=\"nofollow noreferrer\">A Primer of Mathematical Writing</a>\nby Steven G. Kranz.</p></li>\n<li><p><a href=\"http://epubs.siam.org/doi/book/10.1137/1.9780898719550\" rel=\"nofollow noreferrer\">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Hingham.</p></li>\n</ul>\n" }, { "answer_id": 2085051, "score": 2, "text": "<p>Jean Pierre Serre is in some sense a reference on good article writing. I would watch the awesome lecture of him \"<a href=\"https://www.youtube.com/watch?v=ECQyFzzBHlo\" rel=\"nofollow noreferrer\">How to write mathematics badly</a>\". Gives very good insights that you could pass to your students.</p>\n\n<p>I would also suggest <a href=\"https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false\" rel=\"nofollow noreferrer\">Mathematical Writing</a>, of Donald Knuth</p>\n" }, { "answer_id": 2085052, "score": 2, "text": "<p>See these books:</p>\n\n<ul>\n<li><p><a href=\"http://bookstore.ams.org/pmw\" rel=\"nofollow noreferrer\">A Primer of Mathematical Writing</a>\nby Steven G. Kranz.</p></li>\n<li><p><a href=\"http://epubs.siam.org/doi/book/10.1137/1.9780898719550\" rel=\"nofollow noreferrer\">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Hingham.</p></li>\n</ul>\n" } ]
Question: Books or authors who write about how to write mathematics properly Answer 1: <p>Jean-Pierre Serre is in some sense a reference on good article writing. I would watch his awesome lecture "<a href="https://www.youtube.com/watch?v=ECQyFzzBHlo" rel="nofollow noreferrer">How to write mathematics badly</a>". It gives very good insights that you could pass on to your students.</p> <p>I would also suggest <a href="https://books.google.pt/books?id=dDOehHMbUMcC&amp;lpg=PP1&amp;dq=Mathematical%20Writing%20knuth&amp;pg=PP1#v=onepage&amp;q=Mathematical%20Writing%20knuth&amp;f=false" rel="nofollow noreferrer">Mathematical Writing</a> by Donald Knuth.</p> Answer 2: <p>See these books:</p> <ul> <li><p><a href="http://bookstore.ams.org/pmw" rel="nofollow noreferrer">A Primer of Mathematical Writing</a> by Steven G. Krantz.</p></li> <li><p><a href="http://epubs.siam.org/doi/book/10.1137/1.9780898719550" rel="nofollow noreferrer">Handbook of Writing for the Mathematical Sciences</a> by Nicholas J. Higham.</p></li> </ul>
Let $X$ be standard normal and $a&gt;b&gt;0$, prove that $\lim\limits_{\epsilon\to 0}\epsilon^2\log P(|\epsilon X -a|&lt;b)=-\frac{(a-b)^2}{2}$
2,083,763
6
https://math.stackexchange.com/questions/2083763/let-x-be-standard-normal-and-ab0-prove-that-lim-limits-epsilon-to-0
<blockquote> <p>Let $X$ be a standard normal random variable, with $a,b&gt;0$ and $a-b&gt;0$, prove that $$\lim_{\epsilon\to 0}\epsilon^2\log P(|\epsilon X -a|&lt;b)=-\frac{(a-b)^2}{2}$$</p> </blockquote> <p>I'm studying for a qual and this was <a href="https://www.math.purdue.edu/files/academic/grad/qualexams/MA51900/MA519-aug15.pdf" rel="nofollow noreferrer">a previous problem</a> (problem 6b). From part a, we know $\lim_{\epsilon\to 0} P(|\epsilon X -a|&lt;b)=0$.</p> <p>I tried using L'Hopital's rule:</p> <p>$$\frac{\log P(|\epsilon X -a|&lt;b)}{1/\epsilon^2}=\frac{1}{-2/\epsilon^3}\frac{1}{P(|\epsilon X -a|&lt;b)}\frac{d}{d\epsilon }P(|\epsilon X -a|&lt;b)$$</p> <p>Then we need to compute $\frac{d}{d\epsilon }P(|\epsilon X -a|&lt;b)$:</p> <p>$$\frac{d}{d\epsilon }P(|\epsilon X -a|&lt;b)=\frac{d}{d\epsilon} \int_{(a-b)/\epsilon}^{(a+b)/\epsilon} \frac{1}{\sqrt{2\pi}}e^{-t^2/2}~dt$$</p> <p>Here is where I am stuck. Of course I can use the fundamental theorem of calculus but it turns into a huge mess. How do I proceed from here?</p>
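For reference, a sketch of the stuck derivative via the fundamental theorem of calculus and the chain rule applied to both limits of integration:

$$\frac{d}{d\epsilon }P(|\epsilon X -a|&lt;b)=\frac{1}{\sqrt{2\pi}}\left[e^{-(a+b)^2/(2\epsilon^2)}\cdot\left(-\frac{a+b}{\epsilon^2}\right)-e^{-(a-b)^2/(2\epsilon^2)}\cdot\left(-\frac{a-b}{\epsilon^2}\right)\right]$$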
[ { "answer_id": 2083951, "score": 8, "text": "<p><a href=\"https://en.wikipedia.org/wiki/Laplace&#39;s_method\" rel=\"noreferrer\">Laplace-type methods</a> are the classical approach to this kind of <a href=\"https://en.wikipedia.org/wiki/Large_deviations_theory\" rel=\"noreferrer\">large deviations</a> results but it might be worth to detail the computations in the present case. We already know that\n$$\nP(|\\epsilon X -a|&lt;b)=\\frac1{\\sqrt{2\\pi}}I(\\epsilon)\\quad \\text{with}\\quad I(\\epsilon)=\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon} e^{-t^2/2}~dt\n$$\nThe change of variable \n$$\nt=(a-b)/\\epsilon+s\\epsilon\n$$\nyields\n$$\nI(\\epsilon)=e^{-(a-b)^2/2\\epsilon^2}\\epsilon J(\\epsilon)\\quad \\text{with}\\quad J(\\epsilon)=\\int_{0}^{2b/\\epsilon^2}e^{-s^2\\epsilon^2/2-s(a-b)}~ds\n$$\nNow, for every $\\epsilon$,\n$$\nJ(\\epsilon)\\leqslant\\int_{0}^{\\infty}e^{-s(a-b)}~ds=\\frac1{a-b}\n$$\nand, for every $\\epsilon$ in $(0,1)$, $2b/\\epsilon^2\\geqslant2b/\\epsilon$ hence\n$$\nJ(\\epsilon)\\geqslant\\int_{0}^{2b/\\epsilon}e^{-2b^2-s(a-b)}~ds=\\frac{e^{-2b^2}}{a-b}-o(1)\n$$\nFinally, $J(\\epsilon)=\\Theta(1)$ hence $\\log J(\\epsilon)=\\Theta(1)$ and, as desired,\n$$\n\\epsilon^2\\log P(|\\epsilon X -a|&lt;b)=\\epsilon^2\\log I(\\epsilon)+\\Theta(\\epsilon^2\\log\\epsilon)=\\color{red}{-\\tfrac12(a-b)^2}+\\Theta(\\epsilon^2\\log\\epsilon)\n$$ \nMore generally, for every interval $B$ (and this case can be still further extended to other Borel sets), \n$$\n\\epsilon^2\\log P(\\epsilon X \\in B)=-\\tfrac12\\inf_{x\\in B}x^2\n$$</p>\n" }, { "answer_id": 2083865, "score": 3, "text": "<p>Let $I(\\epsilon)$ be the integral given by </p>\n\n<p>$$I(\\epsilon)=\\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon}e^{-t^2/2}\\,dt$$</p>\n\n<p>Then, we have the following estimates. An upper bound for $I(\\epsilon)$ is </p>\n\n<p>$$I(\\epsilon)= \\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon}e^{-t^2/2}\\,dt\\le \\frac{1}{\\sqrt{2\\pi}}e^{-(a-b)^2/(2\\epsilon^2)}\\frac{b}{2\\epsilon}\\tag 1$$</p>\n\n<p>For $0&lt;\\epsilon&lt;2b$, a lower bound for $I(\\epsilon)$ is</p>\n\n<p>$$\\begin{align}\nI(\\epsilon)&amp;= \\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon}e^{-t^2/2}\\,dt\\\\\\\\\n&amp;\\ge \\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a-b)/\\epsilon+1}e^{-t^2/2}\\,dt\\\\\\\\\n&amp;\\ge \\frac{1}{\\sqrt{2\\pi}} e^{-\\frac12\\left(\\frac{a-b}{\\epsilon}+1\\right)^2}\\\\\\\\\n&amp;=\\frac{1}{\\sqrt{2\\pi e}}\\,e^{-(a-b)/\\epsilon}\\,e^{-(a-b)^2/(2\\epsilon^2)}\\tag 2 \n\\end{align}$$</p>\n\n<p>Using $(1)$ and $(2)$ together shows</p>\n\n<p>$$-\\frac{(a-b)^2}2-\\epsilon(a-b)+\\epsilon^2 \\log\\left(\\frac{1}{\\sqrt{2\\pi e}}\\right) \\le \\epsilon^2 \\log(I(\\epsilon))\\le -\\frac{(a-b)^2}2 +\\epsilon^2 \\log\\left(\\frac{b}{2\\sqrt{2\\pi}\\,\\epsilon}\\right)$$</p>\n\n<p>whence application of the squeeze theorem yields the coveted limit.</p>\n" }, { "answer_id": 2083951, "score": 8, "text": "<p><a href=\"https://en.wikipedia.org/wiki/Laplace&#39;s_method\" rel=\"noreferrer\">Laplace-type methods</a> are the classical approach to this kind of <a href=\"https://en.wikipedia.org/wiki/Large_deviations_theory\" rel=\"noreferrer\">large deviations</a> results but it might be worth to detail the computations in the present case. 
We already know that\n$$\nP(|\\epsilon X -a|&lt;b)=\\frac1{\\sqrt{2\\pi}}I(\\epsilon)\\quad \\text{with}\\quad I(\\epsilon)=\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon} e^{-t^2/2}~dt\n$$\nThe change of variable \n$$\nt=(a-b)/\\epsilon+s\\epsilon\n$$\nyields\n$$\nI(\\epsilon)=e^{-(a-b)^2/2\\epsilon^2}\\epsilon J(\\epsilon)\\quad \\text{with}\\quad J(\\epsilon)=\\int_{0}^{2b/\\epsilon^2}e^{-s^2\\epsilon^2/2-s(a-b)}~ds\n$$\nNow, for every $\\epsilon$,\n$$\nJ(\\epsilon)\\leqslant\\int_{0}^{\\infty}e^{-s(a-b)}~ds=\\frac1{a-b}\n$$\nand, for every $\\epsilon$ in $(0,1)$, $2b/\\epsilon^2\\geqslant2b/\\epsilon$ hence\n$$\nJ(\\epsilon)\\geqslant\\int_{0}^{2b/\\epsilon}e^{-2b^2-s(a-b)}~ds=\\frac{e^{-2b^2}}{a-b}-o(1)\n$$\nFinally, $J(\\epsilon)=\\Theta(1)$ hence $\\log J(\\epsilon)=\\Theta(1)$ and, as desired,\n$$\n\\epsilon^2\\log P(|\\epsilon X -a|&lt;b)=\\epsilon^2\\log I(\\epsilon)+\\Theta(\\epsilon^2\\log\\epsilon)=\\color{red}{-\\tfrac12(a-b)^2}+\\Theta(\\epsilon^2\\log\\epsilon)\n$$ \nMore generally, for every interval $B$ (and this case can be still further extended to other Borel sets), \n$$\n\\epsilon^2\\log P(\\epsilon X \\in B)=-\\tfrac12\\inf_{x\\in B}x^2\n$$</p>\n" }, { "answer_id": 2083865, "score": 3, "text": "<p>Let $I(\\epsilon)$ be the integral given by </p>\n\n<p>$$I(\\epsilon)=\\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon}e^{-t^2/2}\\,dt$$</p>\n\n<p>Then, we have the following estimates. An upper bound for $I(\\epsilon)$ is </p>\n\n<p>$$I(\\epsilon)= \\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon}e^{-t^2/2}\\,dt\\le \\frac{1}{\\sqrt{2\\pi}}e^{-(a-b)^2/(2\\epsilon^2)}\\frac{b}{2\\epsilon}\\tag 1$$</p>\n\n<p>For $0&lt;\\epsilon&lt;2b$, a lower bound for $I(\\epsilon)$ is</p>\n\n<p>$$\\begin{align}\nI(\\epsilon)&amp;= \\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon}e^{-t^2/2}\\,dt\\\\\\\\\n&amp;\\ge \\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a-b)/\\epsilon+1}e^{-t^2/2}\\,dt\\\\\\\\\n&amp;\\ge \\frac{1}{\\sqrt{2\\pi}} e^{-\\frac12\\left(\\frac{a-b}{\\epsilon}+1\\right)^2}\\\\\\\\\n&amp;=\\frac{1}{\\sqrt{2\\pi e}}\\,e^{-(a-b)/\\epsilon}\\,e^{-(a-b)^2/(2\\epsilon^2)}\\tag 2 \n\\end{align}$$</p>\n\n<p>Using $(1)$ and $(2)$ together shows</p>\n\n<p>$$-\\frac{(a-b)^2}2-\\epsilon(a-b)+\\epsilon^2 \\log\\left(\\frac{1}{\\sqrt{2\\pi e}}\\right) \\le \\epsilon^2 \\log(I(\\epsilon))\\le -\\frac{(a-b)^2}2 +\\epsilon^2 \\log\\left(\\frac{b}{2\\sqrt{2\\pi}\\,\\epsilon}\\right)$$</p>\n\n<p>whence application of the squeeze theorem yields the coveted limit.</p>\n" }, { "answer_id": 2083951, "score": 8, "text": "<p><a href=\"https://en.wikipedia.org/wiki/Laplace&#39;s_method\" rel=\"noreferrer\">Laplace-type methods</a> are the classical approach to this kind of <a href=\"https://en.wikipedia.org/wiki/Large_deviations_theory\" rel=\"noreferrer\">large deviations</a> results but it might be worth to detail the computations in the present case. 
We already know that\n$$\nP(|\\epsilon X -a|&lt;b)=\\frac1{\\sqrt{2\\pi}}I(\\epsilon)\\quad \\text{with}\\quad I(\\epsilon)=\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon} e^{-t^2/2}~dt\n$$\nThe change of variable \n$$\nt=(a-b)/\\epsilon+s\\epsilon\n$$\nyields\n$$\nI(\\epsilon)=e^{-(a-b)^2/2\\epsilon^2}\\epsilon J(\\epsilon)\\quad \\text{with}\\quad J(\\epsilon)=\\int_{0}^{2b/\\epsilon^2}e^{-s^2\\epsilon^2/2-s(a-b)}~ds\n$$\nNow, for every $\\epsilon$,\n$$\nJ(\\epsilon)\\leqslant\\int_{0}^{\\infty}e^{-s(a-b)}~ds=\\frac1{a-b}\n$$\nand, for every $\\epsilon$ in $(0,1)$, $2b/\\epsilon^2\\geqslant2b/\\epsilon$ hence\n$$\nJ(\\epsilon)\\geqslant\\int_{0}^{2b/\\epsilon}e^{-2b^2-s(a-b)}~ds=\\frac{e^{-2b^2}}{a-b}-o(1)\n$$\nFinally, $J(\\epsilon)=\\Theta(1)$ hence $\\log J(\\epsilon)=\\Theta(1)$ and, as desired,\n$$\n\\epsilon^2\\log P(|\\epsilon X -a|&lt;b)=\\epsilon^2\\log I(\\epsilon)+\\Theta(\\epsilon^2\\log\\epsilon)=\\color{red}{-\\tfrac12(a-b)^2}+\\Theta(\\epsilon^2\\log\\epsilon)\n$$ \nMore generally, for every interval $B$ (and this case can be still further extended to other Borel sets), \n$$\n\\epsilon^2\\log P(\\epsilon X \\in B)=-\\tfrac12\\inf_{x\\in B}x^2\n$$</p>\n" }, { "answer_id": 2083865, "score": 3, "text": "<p>Let $I(\\epsilon)$ be the integral given by </p>\n\n<p>$$I(\\epsilon)=\\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon}e^{-t^2/2}\\,dt$$</p>\n\n<p>Then, we have the following estimates. An upper bound for $I(\\epsilon)$ is </p>\n\n<p>$$I(\\epsilon)= \\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon}e^{-t^2/2}\\,dt\\le \\frac{1}{\\sqrt{2\\pi}}e^{-(a-b)^2/(2\\epsilon^2)}\\frac{b}{2\\epsilon}\\tag 1$$</p>\n\n<p>For $0&lt;\\epsilon&lt;2b$, a lower bound for $I(\\epsilon)$ is</p>\n\n<p>$$\\begin{align}\nI(\\epsilon)&amp;= \\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon}e^{-t^2/2}\\,dt\\\\\\\\\n&amp;\\ge \\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a-b)/\\epsilon+1}e^{-t^2/2}\\,dt\\\\\\\\\n&amp;\\ge \\frac{1}{\\sqrt{2\\pi}} e^{-\\frac12\\left(\\frac{a-b}{\\epsilon}+1\\right)^2}\\\\\\\\\n&amp;=\\frac{1}{\\sqrt{2\\pi e}}\\,e^{-(a-b)/\\epsilon}\\,e^{-(a-b)^2/(2\\epsilon^2)}\\tag 2 \n\\end{align}$$</p>\n\n<p>Using $(1)$ and $(2)$ together shows</p>\n\n<p>$$-\\frac{(a-b)^2}2-\\epsilon(a-b)+\\epsilon^2 \\log\\left(\\frac{1}{\\sqrt{2\\pi e}}\\right) \\le \\epsilon^2 \\log(I(\\epsilon))\\le -\\frac{(a-b)^2}2 +\\epsilon^2 \\log\\left(\\frac{b}{2\\sqrt{2\\pi}\\,\\epsilon}\\right)$$</p>\n\n<p>whence application of the squeeze theorem yields the coveted limit.</p>\n" }, { "answer_id": 2083951, "score": 8, "text": "<p><a href=\"https://en.wikipedia.org/wiki/Laplace&#39;s_method\" rel=\"noreferrer\">Laplace-type methods</a> are the classical approach to this kind of <a href=\"https://en.wikipedia.org/wiki/Large_deviations_theory\" rel=\"noreferrer\">large deviations</a> results but it might be worth to detail the computations in the present case. 
We already know that\n$$\nP(|\\epsilon X -a|&lt;b)=\\frac1{\\sqrt{2\\pi}}I(\\epsilon)\\quad \\text{with}\\quad I(\\epsilon)=\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon} e^{-t^2/2}~dt\n$$\nThe change of variable \n$$\nt=(a-b)/\\epsilon+s\\epsilon\n$$\nyields\n$$\nI(\\epsilon)=e^{-(a-b)^2/2\\epsilon^2}\\epsilon J(\\epsilon)\\quad \\text{with}\\quad J(\\epsilon)=\\int_{0}^{2b/\\epsilon^2}e^{-s^2\\epsilon^2/2-s(a-b)}~ds\n$$\nNow, for every $\\epsilon$,\n$$\nJ(\\epsilon)\\leqslant\\int_{0}^{\\infty}e^{-s(a-b)}~ds=\\frac1{a-b}\n$$\nand, for every $\\epsilon$ in $(0,1)$, $2b/\\epsilon^2\\geqslant2b/\\epsilon$ hence\n$$\nJ(\\epsilon)\\geqslant\\int_{0}^{2b/\\epsilon}e^{-2b^2-s(a-b)}~ds=\\frac{e^{-2b^2}}{a-b}-o(1)\n$$\nFinally, $J(\\epsilon)=\\Theta(1)$ hence $\\log J(\\epsilon)=\\Theta(1)$ and, as desired,\n$$\n\\epsilon^2\\log P(|\\epsilon X -a|&lt;b)=\\epsilon^2\\log I(\\epsilon)+\\Theta(\\epsilon^2\\log\\epsilon)=\\color{red}{-\\tfrac12(a-b)^2}+\\Theta(\\epsilon^2\\log\\epsilon)\n$$ \nMore generally, for every interval $B$ (and this case can be still further extended to other Borel sets), \n$$\n\\epsilon^2\\log P(\\epsilon X \\in B)=-\\tfrac12\\inf_{x\\in B}x^2\n$$</p>\n" }, { "answer_id": 2083865, "score": 3, "text": "<p>Let $I(\\epsilon)$ be the integral given by </p>\n\n<p>$$I(\\epsilon)=\\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon}e^{-t^2/2}\\,dt$$</p>\n\n<p>Then, we have the following estimates. An upper bound for $I(\\epsilon)$ is </p>\n\n<p>$$I(\\epsilon)= \\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon}e^{-t^2/2}\\,dt\\le \\frac{1}{\\sqrt{2\\pi}}e^{-(a-b)^2/(2\\epsilon^2)}\\frac{b}{2\\epsilon}\\tag 1$$</p>\n\n<p>For $0&lt;\\epsilon&lt;2b$, a lower bound for $I(\\epsilon)$ is</p>\n\n<p>$$\\begin{align}\nI(\\epsilon)&amp;= \\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a+b)/\\epsilon}e^{-t^2/2}\\,dt\\\\\\\\\n&amp;\\ge \\frac{1}{\\sqrt{2\\pi}}\\int_{(a-b)/\\epsilon}^{(a-b)/\\epsilon+1}e^{-t^2/2}\\,dt\\\\\\\\\n&amp;\\ge \\frac{1}{\\sqrt{2\\pi}} e^{-\\frac12\\left(\\frac{a-b}{\\epsilon}+1\\right)^2}\\\\\\\\\n&amp;=\\frac{1}{\\sqrt{2\\pi e}}\\,e^{-(a-b)/\\epsilon}\\,e^{-(a-b)^2/(2\\epsilon^2)}\\tag 2 \n\\end{align}$$</p>\n\n<p>Using $(1)$ and $(2)$ together shows</p>\n\n<p>$$-\\frac{(a-b)^2}2-\\epsilon(a-b)+\\epsilon^2 \\log\\left(\\frac{1}{\\sqrt{2\\pi e}}\\right) \\le \\epsilon^2 \\log(I(\\epsilon))\\le -\\frac{(a-b)^2}2 +\\epsilon^2 \\log\\left(\\frac{b}{2\\sqrt{2\\pi}\\,\\epsilon}\\right)$$</p>\n\n<p>whence application of the squeeze theorem yields the coveted limit.</p>\n" }, { "answer_id": 2083951, "score": 8, "text": "<p><a href=\"https://en.wikipedia.org/wiki/Laplace&#39;s_method\" rel=\"noreferrer\">Laplace-type methods</a> are the classical approach to this kind of <a href=\"https://en.wikipedia.org/wiki/Large_deviations_theory\" rel=\"noreferrer\">large deviations</a> results but it might be worth to detail the computations in the present case. 
Question: Let $X$ be standard normal and $a&gt;b&gt;0$, prove that $\lim\limits_{\epsilon\to 0}\epsilon^2\log P(|\epsilon X -a|&lt;b)=-\frac{(a-b)^2}{2}$ Answer 1: <p><a href="https://en.wikipedia.org/wiki/Laplace&#39;s_method" rel="noreferrer">Laplace-type methods</a> are the classical approach to this kind of <a href="https://en.wikipedia.org/wiki/Large_deviations_theory" rel="noreferrer">large deviations</a> result, but it might be worth detailing the computations in the present case. We already know that $$ P(|\epsilon X -a|&lt;b)=\frac1{\sqrt{2\pi}}I(\epsilon)\quad \text{with}\quad I(\epsilon)=\int_{(a-b)/\epsilon}^{(a+b)/\epsilon} e^{-t^2/2}~dt $$ The change of variable $$ t=(a-b)/\epsilon+s\epsilon $$ yields $$ I(\epsilon)=e^{-(a-b)^2/2\epsilon^2}\epsilon J(\epsilon)\quad \text{with}\quad J(\epsilon)=\int_{0}^{2b/\epsilon^2}e^{-s^2\epsilon^2/2-s(a-b)}~ds $$ Now, for every $\epsilon$, $$ J(\epsilon)\leqslant\int_{0}^{\infty}e^{-s(a-b)}~ds=\frac1{a-b} $$ and, for every $\epsilon$ in $(0,1)$, $2b/\epsilon^2\geqslant2b/\epsilon$ hence $$ J(\epsilon)\geqslant\int_{0}^{2b/\epsilon}e^{-2b^2-s(a-b)}~ds=\frac{e^{-2b^2}}{a-b}-o(1) $$ Finally, $J(\epsilon)=\Theta(1)$ hence $\log J(\epsilon)=\Theta(1)$ and, as desired, $$ \epsilon^2\log P(|\epsilon X -a|&lt;b)=\epsilon^2\log I(\epsilon)+\Theta(\epsilon^2\log\epsilon)=\color{red}{-\tfrac12(a-b)^2}+\Theta(\epsilon^2\log\epsilon) $$ More generally, for every interval $B$ (and this can be further extended to other Borel sets), $$ \lim_{\epsilon\to0}\epsilon^2\log P(\epsilon X \in B)=-\tfrac12\inf_{x\in B}x^2 $$</p> Answer 2: <p>Let $I(\epsilon)$ be the integral given by </p> <p>$$I(\epsilon)=\frac{1}{\sqrt{2\pi}}\int_{(a-b)/\epsilon}^{(a+b)/\epsilon}e^{-t^2/2}\,dt$$</p> <p>Then, we have the following estimates. An upper bound for $I(\epsilon)$ (bounding the integrand by its value at the left endpoint and multiplying by the length $2b/\epsilon$ of the interval) is </p> <p>$$I(\epsilon)= \frac{1}{\sqrt{2\pi}}\int_{(a-b)/\epsilon}^{(a+b)/\epsilon}e^{-t^2/2}\,dt\le \frac{1}{\sqrt{2\pi}}e^{-(a-b)^2/(2\epsilon^2)}\frac{2b}{\epsilon}\tag 1$$</p> <p>For $0&lt;\epsilon&lt;2b$, a lower bound for $I(\epsilon)$ is</p> <p>$$\begin{align} I(\epsilon)&amp;= \frac{1}{\sqrt{2\pi}}\int_{(a-b)/\epsilon}^{(a+b)/\epsilon}e^{-t^2/2}\,dt\\\\ &amp;\ge \frac{1}{\sqrt{2\pi}}\int_{(a-b)/\epsilon}^{(a-b)/\epsilon+1}e^{-t^2/2}\,dt\\\\ &amp;\ge \frac{1}{\sqrt{2\pi}} e^{-\frac12\left(\frac{a-b}{\epsilon}+1\right)^2}\\\\ &amp;=\frac{1}{\sqrt{2\pi e}}\,e^{-(a-b)/\epsilon}\,e^{-(a-b)^2/(2\epsilon^2)}\tag 2 \end{align}$$</p> <p>Using $(1)$ and $(2)$ together shows</p> <p>$$-\frac{(a-b)^2}2-\epsilon(a-b)+\epsilon^2 \log\left(\frac{1}{\sqrt{2\pi e}}\right) \le \epsilon^2 \log(I(\epsilon))\le -\frac{(a-b)^2}2 +\epsilon^2 \log\left(\frac{2b}{\sqrt{2\pi}\,\epsilon}\right)$$</p> <p>whence application of the squeeze theorem yields the coveted limit.</p>
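Both answers can be sanity-checked numerically. Here is a minimal sketch (an illustration, not part of either answer; it assumes SciPy is available, and the values $a=2$, $b=1$ are arbitrary choices with $a&gt;b&gt;0$) that evaluates $\epsilon^2\log P(|\epsilon X -a|&lt;b)$ in log space, since the probability itself underflows long before $\epsilon$ gets small:

```python
import numpy as np
from scipy.stats import norm

a, b = 2.0, 1.0  # any values with a > b > 0 (arbitrary choice for the demo)

for eps in (0.5, 0.1, 0.02, 0.005):
    lo, hi = (a - b) / eps, (a + b) / eps
    # P(|eps*X - a| < b) = P(lo < X < hi) = sf(lo) - sf(hi); compute the
    # difference in log space as log(sf(lo)) + log(1 - sf(hi)/sf(lo))
    log_p = norm.logsf(lo) + np.log1p(-np.exp(norm.logsf(hi) - norm.logsf(lo)))
    print(f"eps = {eps:5.3f}   eps^2 * log P = {eps**2 * log_p:.6f}")

print(f"predicted limit -(a-b)^2/2 = {-(a - b) ** 2 / 2:.6f}")
```

For small $\epsilon$ the printed values approach $-\tfrac12(a-b)^2=-0.5$, with the slow $\Theta(\epsilon^2\log\epsilon)$ correction visible in the first lines, exactly as the first answer predicts.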
How to solve $8n^2=64n\cdot \log_2(n)$?
2,083,916
6
https://math.stackexchange.com/questions/2083916/how-to-solve-8n2-64n-cdot-log-2n
<p>How would you be able to solve for $n$ in this setup?</p> <p>$$8n^2=64n\cdot \log_2(n)$$</p> <p>I tried tons of online step-by-step solvers, but the only one that got it right was Wolfram, and I can't view its step-by-step solution.</p> <p>Any help will be appreciated!</p>
[ { "answer_id": 2083932, "score": 5, "text": "<p>You may simplify $n$ from both sides ($n=0$ is not an acceptable solution) to get $$8n=64\\log_2{n}\\iff \\frac{8n}{64}=\\log_2{(n)}\\iff2^{\\frac{n}8}=2^{\\log_2(n)}\\iff 2^{\\frac{n}{8}}=n\\iff2^n=n^8$$ So, you are looking for the zeros of $f(n)=2^n-n^8$. This function has two solutions, which are approximately $n_1=1.0999$ and $n_2=43.559$. However, I do not think that there are methods other than numerical that can solve this problem.</p>\n" }, { "answer_id": 2083936, "score": 3, "text": "<p>We certainly cannot have $n=0$, so a division is safe. Upon dividing by $8n$ we obtain</p>\n\n<p>$$ n = 8 \\log_2 n $$</p>\n\n<p>which is a nonlinear equation. You can try fixed-point methods or even Newton's Method to solve this. As Jimmy says, I don't think this can be done analytically.</p>\n" }, { "answer_id": 2084000, "score": 2, "text": "<p>Instead of numerical methods, you can also express the solution in terms of the Lambert W function. In some sense, you can think of this idea as someone has tabulated the solution to the Lambert W function numerically and you are simply reusing his work. The W function is defined by the relation, $W(z e^z) = z$. Another way to understand this relation is that if $W(k) = z$ then $k = z e^z$. </p>\n\n<p>Going back to your question, </p>\n\n<p>$n = 8\\log_2(n) \\Leftrightarrow \\frac{n}{8} = \\frac{\\ln(n)}{\\ln(2)} \\Leftrightarrow -\\frac{\\ln(2)}{8} = -\\ln(n)\\frac{1}{n} \\Leftrightarrow \n-\\frac{\\ln(2)}{8} = \\ln(\\frac{1}{n})e^{\\ln(\\frac{1}{n})}$</p>\n\n<p>Thus by noting that $z = \\ln(\\frac{1}{n})$ and $k = -\\frac{\\ln(2)}{8}$, we get </p>\n\n<p>$W(-\\frac{\\ln(2)}{8}) = \\ln(\\frac{1}{n}) \\Leftrightarrow n = e^{-W(-\\frac{\\ln(2)}{8})} $</p>\n" }, { "answer_id": 2084292, "score": 1, "text": "<p>Consider the equation $$0=8x^2-64x\\, \\log_2(x)=8x(x-8\\;\\log_2(x))=8 x \\left(x-\\frac{8 \\log (x)}{\\log (2)}\\right)$$ So, there is the trivial solution $x=0$.</p>\n\n<p>Now, consider the function $$f(x)=x-\\frac{8 \\log (x)}{\\log (2)}\\implies f'(x)=1-\\frac{8}{x \\log (2)}\\implies f''(x)=\\frac{8}{x^2 \\log (2)}$$ The first derivative cancels at $x=\\frac{8}{\\log (2)}$ and the second derivative test shows that this point corresponds to a minimum $$f\\left(\\frac{8}{\\log (2)}\\right)=\\frac{8 }{\\log (2)}\\left(1+\\log \\left(\\frac{\\log (2)}{8}\\right)\\right)\\approx -16.6886 &lt;0$$ So, there are two roots to $f(x)=0$.</p>\n\n<p>As already said in comments and answers, to compute the roots, one of the solutions is based on numerical methods (Newton being probably the simplest). A quick plot of the function shows a root close to $1$ and another close to $50$.</p>\n\n<p>Starting at a \"reasonable\" guess $x_0$, Newton method will update it according to $$x_{n+1}=x_n-\\frac{f(x_n)}{f'(x_n)}$$ In your case, this would give $$x_{n+1}=\\frac{8 x_n (\\log (x_n)-1)}{x_n \\log (2)-8}$$</p>\n\n<p>For the first root, let us start using $x_0=1$. Newton method will generate the following iterates \n$$\\left(\n\\begin{array}{cc}\n n &amp; x_n \\\\\n 0 &amp; 1.0000000000000000000 \\\\\n 1 &amp; 1.0948626170101320293 \\\\\n 2 &amp; 1.0999837708827475300 \\\\\n 3 &amp; 1.0999970301492763006 \\\\\n 4 &amp; 1.0999970302376094009 \n\\end{array}\n\\right)$$</p>\n\n<p>For the second root, let us start using $x_0=50$. 
Newton's method will generate the following iterates \n$$\left(\n\begin{array}{cc}\n n &amp; x_n \\\n 0 &amp; 50.000000000000000000 \\\n 1 &amp; 43.695596437287775210 \\\n 2 &amp; 43.559336941386572072 \\\n 3 &amp; 43.559260436905874322 \\\n 4 &amp; 43.559260436881656414 \n\end{array}\n\right)$$</p>\n\n<p>The other approach is to use the <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert function</a>; the solutions are then given by $$x_1=-\frac{8 }{\log (2)}W\left(-\frac{\log (2)}{8}\right)\qquad , \qquad x_2=-\frac{8 }{\log (2)}W_{-1}\left(-\frac{\log (2)}{8}\right)$$</p>\n\n<p>Sooner or later, you will learn that any equation which can be written or rewritten as $$A+Bx+C\log(D+Ex)=0$$ has solution(s) which can be expressed in terms of the Lambert function.</p>\n" } ]
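As an illustration only (not part of the answers above), a minimal Python sketch of the same Newton iteration, using the closed-form update $x_{n+1}=\frac{8 x_n (\log (x_n)-1)}{x_n \log (2)-8}$ and the two starting points from the tables:

```python
import math

def newton_root(x, steps=8):
    # Newton's method for f(x) = x - 8*log2(x), written in the closed form
    # x_{n+1} = 8*x_n*(ln(x_n) - 1) / (x_n*ln(2) - 8) derived above
    for _ in range(steps):
        x = 8 * x * (math.log(x) - 1) / (x * math.log(2) - 8)
    return x

for x0 in (1.0, 50.0):
    r = newton_root(x0)
    print(f"x0 = {x0:4.1f}  ->  x = {r:.12f},  8*log2(x) = {8 * math.log2(r):.12f}")
```

Both runs reproduce the tabulated values, $x\approx1.099997030238$ and $x\approx43.559260436882$.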
Question: How to solve $8n^2=64n\cdot \log_2(n)$? Answer 1: <p>You may cancel $n$ from both sides ($n=0$ is not an acceptable solution) to get $$8n=64\log_2{n}\iff \frac{8n}{64}=\log_2{(n)}\iff2^{\frac{n}8}=2^{\log_2(n)}\iff 2^{\frac{n}{8}}=n\iff2^n=n^8$$ So, you are looking for the zeros of $f(n)=2^n-n^8$. This function has two zeros, approximately $n_1=1.0999$ and $n_2=43.559$. However, I do not think that there are methods other than numerical ones that can solve this problem.</p> Answer 2: <p>We certainly cannot have $n=0$, so a division is safe. Upon dividing by $8n$ we obtain</p> <p>$$ n = 8 \log_2 n $$</p> <p>which is a nonlinear equation. You can try fixed-point methods or even Newton's method to solve this. As Jimmy says, I don't think this can be done analytically.</p> Answer 3: <p>Instead of numerical methods, you can also express the solution in terms of the Lambert W function. In some sense, someone has already tabulated the solution to the Lambert W function numerically and you are simply reusing their work. The W function is defined by the relation $W(z e^z) = z$. Another way to understand this relation is that if $W(k) = z$ then $k = z e^z$.</p> <p>Going back to your question,</p> <p>$n = 8\log_2(n) \Leftrightarrow \frac{n}{8} = \frac{\ln(n)}{\ln(2)} \Leftrightarrow -\frac{\ln(2)}{8} = -\ln(n)\frac{1}{n} \Leftrightarrow -\frac{\ln(2)}{8} = \ln(\frac{1}{n})e^{\ln(\frac{1}{n})}$</p> <p>Thus, by noting that $z = \ln(\frac{1}{n})$ and $k = -\frac{\ln(2)}{8}$, we get</p> <p>$W(-\frac{\ln(2)}{8}) = \ln(\frac{1}{n}) \Leftrightarrow n = e^{-W(-\frac{\ln(2)}{8})} $</p> Answer 4: <p>Consider the equation $$0=8x^2-64x\, \log_2(x)=8x(x-8\;\log_2(x))=8 x \left(x-\frac{8 \log (x)}{\log (2)}\right)$$ So, there is the trivial solution $x=0$.</p> <p>Now, consider the function $$f(x)=x-\frac{8 \log (x)}{\log (2)}\implies f'(x)=1-\frac{8}{x \log (2)}\implies f''(x)=\frac{8}{x^2 \log (2)}$$ The first derivative vanishes at $x=\frac{8}{\log (2)}$ and the second derivative test shows that this point corresponds to a minimum $$f\left(\frac{8}{\log (2)}\right)=\frac{8 }{\log (2)}\left(1+\log \left(\frac{\log (2)}{8}\right)\right)\approx -16.6886 &lt;0$$ So, there are two roots of $f(x)=0$.</p> <p>As already said in comments and answers, one way to compute the roots is numerical methods (Newton's being probably the simplest). A quick plot of the function shows a root close to $1$ and another close to $50$.</p> <p>Starting at a "reasonable" guess $x_0$, Newton's method updates it according to $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$ In your case, this gives $$x_{n+1}=\frac{8 x_n (\log (x_n)-1)}{x_n \log (2)-8}$$</p> <p>For the first root, let us start from $x_0=1$. Newton's method generates the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 1.0000000000000000000 \\ 1 &amp; 1.0948626170101320293 \\ 2 &amp; 1.0999837708827475300 \\ 3 &amp; 1.0999970301492763006 \\ 4 &amp; 1.0999970302376094009 \end{array} \right)$$</p> <p>For the second root, let us start from $x_0=50$. Newton's method generates the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 50.000000000000000000 \\ 1 &amp; 43.695596437287775210 \\ 2 &amp; 43.559336941386572072 \\ 3 &amp; 43.559260436905874322 \\ 4 &amp; 43.559260436881656414 \end{array} \right)$$</p> <p>The other approach is to use the <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert function</a>, and the solutions are then given by $$x_1=-\frac{8 }{\log (2)}W\left(-\frac{\log (2)}{8}\right)\qquad , \qquad x_2=-\frac{8 }{\log (2)}W_{-1}\left(-\frac{\log (2)}{8}\right)$$</p> <p>Sooner or later, you will learn that any equation which can be written or rewritten as $$A+Bx+C\log(D+Ex)=0$$ has solution(s) which can be expressed in terms of the Lambert function.</p>
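<p>The values quoted in Answers 1 and 4 are easy to reproduce. A minimal Python sketch (not code from the original answers), assuming SciPy is available and using the brackets $[1, 1.2]$ and $[43, 44]$, which are choices verified by checking that $f$ changes sign on each:</p>
<pre><code>from scipy.optimize import brentq

def f(n):
    # f(n) = 2**n - n**8; its zeros solve 2**(n/8) = n (Answer 1)
    return 2.0**n - n**8

# f changes sign on each bracket, so each contains exactly one zero
n1 = brentq(f, 1.0, 1.2)    # ~1.0999970302
n2 = brentq(f, 43.0, 44.0)  # ~43.5592604369
print(n1, n2)
</code></pre>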
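<p>Answer 2 mentions fixed-point methods for $n = 8\log_2 n$. A sketch of how that plays out (the starting values and iteration counts are arbitrary choices, not from the original answers): the map $n \mapsto 8\log_2 n$ has derivative $8/(n\ln 2)$, which is below $1$ only near the larger root, so the smaller root needs the inverted form $n = 2^{n/8}$.</p>
<pre><code>import math

# n -> 8*log2(n) is contractive near the larger root (its derivative
# 8/(n*ln 2) < 1 there), so iteration from 50 converges to ~43.559
n = 50.0
for _ in range(80):
    n = 8.0 * math.log2(n)
print(n)  # ~43.5592604369

# near the smaller root that derivative exceeds 1, so iterate the
# inverted relation n = 2**(n/8), which is contractive there instead
n = 2.0
for _ in range(80):
    n = 2.0 ** (n / 8.0)
print(n)  # ~1.0999970302
</code></pre>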
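<p>Answer 3's closed form can be evaluated directly. A sketch assuming SciPy's <code>lambertw</code>: since $-\ln(2)/8 > -1/e$, both real branches exist, with the principal branch $W_0$ giving the smaller root and $W_{-1}$ the larger one.</p>
<pre><code>import numpy as np
from scipy.special import lambertw

k = -np.log(2.0) / 8.0  # the argument -ln(2)/8 from Answer 3

# n = exp(-W(k)); lambertw returns complex values, but both branches
# are real here because k > -1/e, so taking .real is safe
n1 = np.exp(-lambertw(k, 0)).real   # principal branch: ~1.0999970302
n2 = np.exp(-lambertw(k, -1)).real  # branch -1:        ~43.5592604369
print(n1, n2)
</code></pre>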
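<p>Finally, the explicit Newton update in Answer 4 reproduces both iterate tables. A short sketch running that formula as written (four steps, as in the tables, in ordinary double precision rather than the extended precision shown):</p>
<pre><code>import math

def newton(x, steps=4):
    # explicit update x -> 8x(ln x - 1)/(x ln 2 - 8) for
    # f(x) = x - 8*log2(x), as derived in Answer 4
    for _ in range(steps):
        x = 8.0 * x * (math.log(x) - 1.0) / (x * math.log(2.0) - 8.0)
    return x

print(newton(1.0))   # 1.09999703..., matching the first table
print(newton(50.0))  # 43.55926043..., matching the second table
</code></pre>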
Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 50.000000000000000000 \\ 1 &amp; 43.695596437287775210 \\ 2 &amp; 43.559336941386572072 \\ 3 &amp; 43.559260436905874322 \\ 4 &amp; 43.559260436881656414 \end{array} \right)$$</p> <p>The other solution is to use <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert function</a> and the solutions are then given by $$x_1=-\frac{8 }{\log (2)}W\left(-\frac{\log (2)}{8}\right)\qquad , \qquad x_2=-\frac{8 }{\log (2)}W_{-1}\left(-\frac{\log (2)}{8}\right)$$</p> <p>Sooner or later, you will learn that any equation which can write or rewrite as $$A+Bx+C\log(D+Ex)=0$$ has solution(s) which can be expressed in terms of Lambert function.</p> Answer 25: <p>You may simplify $n$ from both sides ($n=0$ is not an acceptable solution) to get $$8n=64\log_2{n}\iff \frac{8n}{64}=\log_2{(n)}\iff2^{\frac{n}8}=2^{\log_2(n)}\iff 2^{\frac{n}{8}}=n\iff2^n=n^8$$ So, you are looking for the zeros of $f(n)=2^n-n^8$. This function has two solutions, which are approximately $n_1=1.0999$ and $n_2=43.559$. However, I do not think that there are methods other than numerical that can solve this problem.</p> Answer 26: <p>We certainly cannot have $n=0$, so a division is safe. Upon dividing by $8n$ we obtain</p> <p>$$ n = 8 \log_2 n $$</p> <p>which is a nonlinear equation. You can try fixed-point methods or even Newton's Method to solve this. As Jimmy says, I don't think this can be done analytically.</p> Answer 27: <p>Instead of numerical methods, you can also express the solution in terms of the Lambert W function. In some sense, you can think of this idea as someone has tabulated the solution to the Lambert W function numerically and you are simply reusing his work. The W function is defined by the relation, $W(z e^z) = z$. Another way to understand this relation is that if $W(k) = z$ then $k = z e^z$. </p> <p>Going back to your question, </p> <p>$n = 8\log_2(n) \Leftrightarrow \frac{n}{8} = \frac{\ln(n)}{\ln(2)} \Leftrightarrow -\frac{\ln(2)}{8} = -\ln(n)\frac{1}{n} \Leftrightarrow -\frac{\ln(2)}{8} = \ln(\frac{1}{n})e^{\ln(\frac{1}{n})}$</p> <p>Thus by noting that $z = \ln(\frac{1}{n})$ and $k = -\frac{\ln(2)}{8}$, we get </p> <p>$W(-\frac{\ln(2)}{8}) = \ln(\frac{1}{n}) \Leftrightarrow n = e^{-W(-\frac{\ln(2)}{8})} $</p> Answer 28: <p>Consider the equation $$0=8x^2-64x\, \log_2(x)=8x(x-8\;\log_2(x))=8 x \left(x-\frac{8 \log (x)}{\log (2)}\right)$$ So, there is the trivial solution $x=0$.</p> <p>Now, consider the function $$f(x)=x-\frac{8 \log (x)}{\log (2)}\implies f'(x)=1-\frac{8}{x \log (2)}\implies f''(x)=\frac{8}{x^2 \log (2)}$$ The first derivative cancels at $x=\frac{8}{\log (2)}$ and the second derivative test shows that this point corresponds to a minimum $$f\left(\frac{8}{\log (2)}\right)=\frac{8 }{\log (2)}\left(1+\log \left(\frac{\log (2)}{8}\right)\right)\approx -16.6886 &lt;0$$ So, there are two roots to $f(x)=0$.</p> <p>As already said in comments and answers, to compute the roots, one of the solutions is based on numerical methods (Newton being probably the simplest). A quick plot of the function shows a root close to $1$ and another close to $50$.</p> <p>Starting at a "reasonable" guess $x_0$, Newton method will update it according to $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$ In your case, this would give $$x_{n+1}=\frac{8 x_n (\log (x_n)-1)}{x_n \log (2)-8}$$</p> <p>For the first root, let us start using $x_0=1$. 
Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 1.0000000000000000000 \\ 1 &amp; 1.0948626170101320293 \\ 2 &amp; 1.0999837708827475300 \\ 3 &amp; 1.0999970301492763006 \\ 4 &amp; 1.0999970302376094009 \end{array} \right)$$</p> <p>For the second root, let us start using $x_0=50$. Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 50.000000000000000000 \\ 1 &amp; 43.695596437287775210 \\ 2 &amp; 43.559336941386572072 \\ 3 &amp; 43.559260436905874322 \\ 4 &amp; 43.559260436881656414 \end{array} \right)$$</p> <p>The other solution is to use <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert function</a> and the solutions are then given by $$x_1=-\frac{8 }{\log (2)}W\left(-\frac{\log (2)}{8}\right)\qquad , \qquad x_2=-\frac{8 }{\log (2)}W_{-1}\left(-\frac{\log (2)}{8}\right)$$</p> <p>Sooner or later, you will learn that any equation which can write or rewrite as $$A+Bx+C\log(D+Ex)=0$$ has solution(s) which can be expressed in terms of Lambert function.</p> Answer 29: <p>You may simplify $n$ from both sides ($n=0$ is not an acceptable solution) to get $$8n=64\log_2{n}\iff \frac{8n}{64}=\log_2{(n)}\iff2^{\frac{n}8}=2^{\log_2(n)}\iff 2^{\frac{n}{8}}=n\iff2^n=n^8$$ So, you are looking for the zeros of $f(n)=2^n-n^8$. This function has two solutions, which are approximately $n_1=1.0999$ and $n_2=43.559$. However, I do not think that there are methods other than numerical that can solve this problem.</p> Answer 30: <p>We certainly cannot have $n=0$, so a division is safe. Upon dividing by $8n$ we obtain</p> <p>$$ n = 8 \log_2 n $$</p> <p>which is a nonlinear equation. You can try fixed-point methods or even Newton's Method to solve this. As Jimmy says, I don't think this can be done analytically.</p> Answer 31: <p>Instead of numerical methods, you can also express the solution in terms of the Lambert W function. In some sense, you can think of this idea as someone has tabulated the solution to the Lambert W function numerically and you are simply reusing his work. The W function is defined by the relation, $W(z e^z) = z$. Another way to understand this relation is that if $W(k) = z$ then $k = z e^z$. </p> <p>Going back to your question, </p> <p>$n = 8\log_2(n) \Leftrightarrow \frac{n}{8} = \frac{\ln(n)}{\ln(2)} \Leftrightarrow -\frac{\ln(2)}{8} = -\ln(n)\frac{1}{n} \Leftrightarrow -\frac{\ln(2)}{8} = \ln(\frac{1}{n})e^{\ln(\frac{1}{n})}$</p> <p>Thus by noting that $z = \ln(\frac{1}{n})$ and $k = -\frac{\ln(2)}{8}$, we get </p> <p>$W(-\frac{\ln(2)}{8}) = \ln(\frac{1}{n}) \Leftrightarrow n = e^{-W(-\frac{\ln(2)}{8})} $</p> Answer 32: <p>Consider the equation $$0=8x^2-64x\, \log_2(x)=8x(x-8\;\log_2(x))=8 x \left(x-\frac{8 \log (x)}{\log (2)}\right)$$ So, there is the trivial solution $x=0$.</p> <p>Now, consider the function $$f(x)=x-\frac{8 \log (x)}{\log (2)}\implies f'(x)=1-\frac{8}{x \log (2)}\implies f''(x)=\frac{8}{x^2 \log (2)}$$ The first derivative cancels at $x=\frac{8}{\log (2)}$ and the second derivative test shows that this point corresponds to a minimum $$f\left(\frac{8}{\log (2)}\right)=\frac{8 }{\log (2)}\left(1+\log \left(\frac{\log (2)}{8}\right)\right)\approx -16.6886 &lt;0$$ So, there are two roots to $f(x)=0$.</p> <p>As already said in comments and answers, to compute the roots, one of the solutions is based on numerical methods (Newton being probably the simplest). 
A quick plot of the function shows a root close to $1$ and another close to $50$.</p> <p>Starting at a "reasonable" guess $x_0$, Newton method will update it according to $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$ In your case, this would give $$x_{n+1}=\frac{8 x_n (\log (x_n)-1)}{x_n \log (2)-8}$$</p> <p>For the first root, let us start using $x_0=1$. Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 1.0000000000000000000 \\ 1 &amp; 1.0948626170101320293 \\ 2 &amp; 1.0999837708827475300 \\ 3 &amp; 1.0999970301492763006 \\ 4 &amp; 1.0999970302376094009 \end{array} \right)$$</p> <p>For the second root, let us start using $x_0=50$. Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 50.000000000000000000 \\ 1 &amp; 43.695596437287775210 \\ 2 &amp; 43.559336941386572072 \\ 3 &amp; 43.559260436905874322 \\ 4 &amp; 43.559260436881656414 \end{array} \right)$$</p> <p>The other solution is to use <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert function</a> and the solutions are then given by $$x_1=-\frac{8 }{\log (2)}W\left(-\frac{\log (2)}{8}\right)\qquad , \qquad x_2=-\frac{8 }{\log (2)}W_{-1}\left(-\frac{\log (2)}{8}\right)$$</p> <p>Sooner or later, you will learn that any equation which can write or rewrite as $$A+Bx+C\log(D+Ex)=0$$ has solution(s) which can be expressed in terms of Lambert function.</p> Answer 33: <p>You may simplify $n$ from both sides ($n=0$ is not an acceptable solution) to get $$8n=64\log_2{n}\iff \frac{8n}{64}=\log_2{(n)}\iff2^{\frac{n}8}=2^{\log_2(n)}\iff 2^{\frac{n}{8}}=n\iff2^n=n^8$$ So, you are looking for the zeros of $f(n)=2^n-n^8$. This function has two solutions, which are approximately $n_1=1.0999$ and $n_2=43.559$. However, I do not think that there are methods other than numerical that can solve this problem.</p> Answer 34: <p>We certainly cannot have $n=0$, so a division is safe. Upon dividing by $8n$ we obtain</p> <p>$$ n = 8 \log_2 n $$</p> <p>which is a nonlinear equation. You can try fixed-point methods or even Newton's Method to solve this. As Jimmy says, I don't think this can be done analytically.</p> Answer 35: <p>Instead of numerical methods, you can also express the solution in terms of the Lambert W function. In some sense, you can think of this idea as someone has tabulated the solution to the Lambert W function numerically and you are simply reusing his work. The W function is defined by the relation, $W(z e^z) = z$. Another way to understand this relation is that if $W(k) = z$ then $k = z e^z$. 
</p> <p>Going back to your question, </p> <p>$n = 8\log_2(n) \Leftrightarrow \frac{n}{8} = \frac{\ln(n)}{\ln(2)} \Leftrightarrow -\frac{\ln(2)}{8} = -\ln(n)\frac{1}{n} \Leftrightarrow -\frac{\ln(2)}{8} = \ln(\frac{1}{n})e^{\ln(\frac{1}{n})}$</p> <p>Thus by noting that $z = \ln(\frac{1}{n})$ and $k = -\frac{\ln(2)}{8}$, we get </p> <p>$W(-\frac{\ln(2)}{8}) = \ln(\frac{1}{n}) \Leftrightarrow n = e^{-W(-\frac{\ln(2)}{8})} $</p> Answer 36: <p>Consider the equation $$0=8x^2-64x\, \log_2(x)=8x(x-8\;\log_2(x))=8 x \left(x-\frac{8 \log (x)}{\log (2)}\right)$$ So, there is the trivial solution $x=0$.</p> <p>Now, consider the function $$f(x)=x-\frac{8 \log (x)}{\log (2)}\implies f'(x)=1-\frac{8}{x \log (2)}\implies f''(x)=\frac{8}{x^2 \log (2)}$$ The first derivative cancels at $x=\frac{8}{\log (2)}$ and the second derivative test shows that this point corresponds to a minimum $$f\left(\frac{8}{\log (2)}\right)=\frac{8 }{\log (2)}\left(1+\log \left(\frac{\log (2)}{8}\right)\right)\approx -16.6886 &lt;0$$ So, there are two roots to $f(x)=0$.</p> <p>As already said in comments and answers, to compute the roots, one of the solutions is based on numerical methods (Newton being probably the simplest). A quick plot of the function shows a root close to $1$ and another close to $50$.</p> <p>Starting at a "reasonable" guess $x_0$, Newton method will update it according to $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$ In your case, this would give $$x_{n+1}=\frac{8 x_n (\log (x_n)-1)}{x_n \log (2)-8}$$</p> <p>For the first root, let us start using $x_0=1$. Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 1.0000000000000000000 \\ 1 &amp; 1.0948626170101320293 \\ 2 &amp; 1.0999837708827475300 \\ 3 &amp; 1.0999970301492763006 \\ 4 &amp; 1.0999970302376094009 \end{array} \right)$$</p> <p>For the second root, let us start using $x_0=50$. Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 50.000000000000000000 \\ 1 &amp; 43.695596437287775210 \\ 2 &amp; 43.559336941386572072 \\ 3 &amp; 43.559260436905874322 \\ 4 &amp; 43.559260436881656414 \end{array} \right)$$</p> <p>The other solution is to use <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert function</a> and the solutions are then given by $$x_1=-\frac{8 }{\log (2)}W\left(-\frac{\log (2)}{8}\right)\qquad , \qquad x_2=-\frac{8 }{\log (2)}W_{-1}\left(-\frac{\log (2)}{8}\right)$$</p> <p>Sooner or later, you will learn that any equation which can write or rewrite as $$A+Bx+C\log(D+Ex)=0$$ has solution(s) which can be expressed in terms of Lambert function.</p> Answer 37: <p>You may simplify $n$ from both sides ($n=0$ is not an acceptable solution) to get $$8n=64\log_2{n}\iff \frac{8n}{64}=\log_2{(n)}\iff2^{\frac{n}8}=2^{\log_2(n)}\iff 2^{\frac{n}{8}}=n\iff2^n=n^8$$ So, you are looking for the zeros of $f(n)=2^n-n^8$. This function has two solutions, which are approximately $n_1=1.0999$ and $n_2=43.559$. However, I do not think that there are methods other than numerical that can solve this problem.</p> Answer 38: <p>We certainly cannot have $n=0$, so a division is safe. Upon dividing by $8n$ we obtain</p> <p>$$ n = 8 \log_2 n $$</p> <p>which is a nonlinear equation. You can try fixed-point methods or even Newton's Method to solve this. As Jimmy says, I don't think this can be done analytically.</p> Answer 39: <p>Instead of numerical methods, you can also express the solution in terms of the Lambert W function. 
In some sense, you can think of this idea as someone has tabulated the solution to the Lambert W function numerically and you are simply reusing his work. The W function is defined by the relation, $W(z e^z) = z$. Another way to understand this relation is that if $W(k) = z$ then $k = z e^z$. </p> <p>Going back to your question, </p> <p>$n = 8\log_2(n) \Leftrightarrow \frac{n}{8} = \frac{\ln(n)}{\ln(2)} \Leftrightarrow -\frac{\ln(2)}{8} = -\ln(n)\frac{1}{n} \Leftrightarrow -\frac{\ln(2)}{8} = \ln(\frac{1}{n})e^{\ln(\frac{1}{n})}$</p> <p>Thus by noting that $z = \ln(\frac{1}{n})$ and $k = -\frac{\ln(2)}{8}$, we get </p> <p>$W(-\frac{\ln(2)}{8}) = \ln(\frac{1}{n}) \Leftrightarrow n = e^{-W(-\frac{\ln(2)}{8})} $</p> Answer 40: <p>Consider the equation $$0=8x^2-64x\, \log_2(x)=8x(x-8\;\log_2(x))=8 x \left(x-\frac{8 \log (x)}{\log (2)}\right)$$ So, there is the trivial solution $x=0$.</p> <p>Now, consider the function $$f(x)=x-\frac{8 \log (x)}{\log (2)}\implies f'(x)=1-\frac{8}{x \log (2)}\implies f''(x)=\frac{8}{x^2 \log (2)}$$ The first derivative cancels at $x=\frac{8}{\log (2)}$ and the second derivative test shows that this point corresponds to a minimum $$f\left(\frac{8}{\log (2)}\right)=\frac{8 }{\log (2)}\left(1+\log \left(\frac{\log (2)}{8}\right)\right)\approx -16.6886 &lt;0$$ So, there are two roots to $f(x)=0$.</p> <p>As already said in comments and answers, to compute the roots, one of the solutions is based on numerical methods (Newton being probably the simplest). A quick plot of the function shows a root close to $1$ and another close to $50$.</p> <p>Starting at a "reasonable" guess $x_0$, Newton method will update it according to $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$ In your case, this would give $$x_{n+1}=\frac{8 x_n (\log (x_n)-1)}{x_n \log (2)-8}$$</p> <p>For the first root, let us start using $x_0=1$. Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 1.0000000000000000000 \\ 1 &amp; 1.0948626170101320293 \\ 2 &amp; 1.0999837708827475300 \\ 3 &amp; 1.0999970301492763006 \\ 4 &amp; 1.0999970302376094009 \end{array} \right)$$</p> <p>For the second root, let us start using $x_0=50$. Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 50.000000000000000000 \\ 1 &amp; 43.695596437287775210 \\ 2 &amp; 43.559336941386572072 \\ 3 &amp; 43.559260436905874322 \\ 4 &amp; 43.559260436881656414 \end{array} \right)$$</p> <p>The other solution is to use <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert function</a> and the solutions are then given by $$x_1=-\frac{8 }{\log (2)}W\left(-\frac{\log (2)}{8}\right)\qquad , \qquad x_2=-\frac{8 }{\log (2)}W_{-1}\left(-\frac{\log (2)}{8}\right)$$</p> <p>Sooner or later, you will learn that any equation which can write or rewrite as $$A+Bx+C\log(D+Ex)=0$$ has solution(s) which can be expressed in terms of Lambert function.</p> Answer 41: <p>You may simplify $n$ from both sides ($n=0$ is not an acceptable solution) to get $$8n=64\log_2{n}\iff \frac{8n}{64}=\log_2{(n)}\iff2^{\frac{n}8}=2^{\log_2(n)}\iff 2^{\frac{n}{8}}=n\iff2^n=n^8$$ So, you are looking for the zeros of $f(n)=2^n-n^8$. This function has two solutions, which are approximately $n_1=1.0999$ and $n_2=43.559$. However, I do not think that there are methods other than numerical that can solve this problem.</p> Answer 42: <p>We certainly cannot have $n=0$, so a division is safe. 
Upon dividing by $8n$ we obtain</p> <p>$$ n = 8 \log_2 n $$</p> <p>which is a nonlinear equation. You can try fixed-point methods or even Newton's Method to solve this. As Jimmy says, I don't think this can be done analytically.</p> Answer 43: <p>Instead of numerical methods, you can also express the solution in terms of the Lambert W function. In some sense, you can think of this idea as someone has tabulated the solution to the Lambert W function numerically and you are simply reusing his work. The W function is defined by the relation, $W(z e^z) = z$. Another way to understand this relation is that if $W(k) = z$ then $k = z e^z$. </p> <p>Going back to your question, </p> <p>$n = 8\log_2(n) \Leftrightarrow \frac{n}{8} = \frac{\ln(n)}{\ln(2)} \Leftrightarrow -\frac{\ln(2)}{8} = -\ln(n)\frac{1}{n} \Leftrightarrow -\frac{\ln(2)}{8} = \ln(\frac{1}{n})e^{\ln(\frac{1}{n})}$</p> <p>Thus by noting that $z = \ln(\frac{1}{n})$ and $k = -\frac{\ln(2)}{8}$, we get </p> <p>$W(-\frac{\ln(2)}{8}) = \ln(\frac{1}{n}) \Leftrightarrow n = e^{-W(-\frac{\ln(2)}{8})} $</p> Answer 44: <p>Consider the equation $$0=8x^2-64x\, \log_2(x)=8x(x-8\;\log_2(x))=8 x \left(x-\frac{8 \log (x)}{\log (2)}\right)$$ So, there is the trivial solution $x=0$.</p> <p>Now, consider the function $$f(x)=x-\frac{8 \log (x)}{\log (2)}\implies f'(x)=1-\frac{8}{x \log (2)}\implies f''(x)=\frac{8}{x^2 \log (2)}$$ The first derivative cancels at $x=\frac{8}{\log (2)}$ and the second derivative test shows that this point corresponds to a minimum $$f\left(\frac{8}{\log (2)}\right)=\frac{8 }{\log (2)}\left(1+\log \left(\frac{\log (2)}{8}\right)\right)\approx -16.6886 &lt;0$$ So, there are two roots to $f(x)=0$.</p> <p>As already said in comments and answers, to compute the roots, one of the solutions is based on numerical methods (Newton being probably the simplest). A quick plot of the function shows a root close to $1$ and another close to $50$.</p> <p>Starting at a "reasonable" guess $x_0$, Newton method will update it according to $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$ In your case, this would give $$x_{n+1}=\frac{8 x_n (\log (x_n)-1)}{x_n \log (2)-8}$$</p> <p>For the first root, let us start using $x_0=1$. Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 1.0000000000000000000 \\ 1 &amp; 1.0948626170101320293 \\ 2 &amp; 1.0999837708827475300 \\ 3 &amp; 1.0999970301492763006 \\ 4 &amp; 1.0999970302376094009 \end{array} \right)$$</p> <p>For the second root, let us start using $x_0=50$. 
Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 50.000000000000000000 \\ 1 &amp; 43.695596437287775210 \\ 2 &amp; 43.559336941386572072 \\ 3 &amp; 43.559260436905874322 \\ 4 &amp; 43.559260436881656414 \end{array} \right)$$</p> <p>The other solution is to use <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert function</a> and the solutions are then given by $$x_1=-\frac{8 }{\log (2)}W\left(-\frac{\log (2)}{8}\right)\qquad , \qquad x_2=-\frac{8 }{\log (2)}W_{-1}\left(-\frac{\log (2)}{8}\right)$$</p> <p>Sooner or later, you will learn that any equation which can write or rewrite as $$A+Bx+C\log(D+Ex)=0$$ has solution(s) which can be expressed in terms of Lambert function.</p> Answer 45: <p>You may simplify $n$ from both sides ($n=0$ is not an acceptable solution) to get $$8n=64\log_2{n}\iff \frac{8n}{64}=\log_2{(n)}\iff2^{\frac{n}8}=2^{\log_2(n)}\iff 2^{\frac{n}{8}}=n\iff2^n=n^8$$ So, you are looking for the zeros of $f(n)=2^n-n^8$. This function has two solutions, which are approximately $n_1=1.0999$ and $n_2=43.559$. However, I do not think that there are methods other than numerical that can solve this problem.</p> Answer 46: <p>We certainly cannot have $n=0$, so a division is safe. Upon dividing by $8n$ we obtain</p> <p>$$ n = 8 \log_2 n $$</p> <p>which is a nonlinear equation. You can try fixed-point methods or even Newton's Method to solve this. As Jimmy says, I don't think this can be done analytically.</p> Answer 47: <p>Instead of numerical methods, you can also express the solution in terms of the Lambert W function. In some sense, you can think of this idea as someone has tabulated the solution to the Lambert W function numerically and you are simply reusing his work. The W function is defined by the relation, $W(z e^z) = z$. Another way to understand this relation is that if $W(k) = z$ then $k = z e^z$. </p> <p>Going back to your question, </p> <p>$n = 8\log_2(n) \Leftrightarrow \frac{n}{8} = \frac{\ln(n)}{\ln(2)} \Leftrightarrow -\frac{\ln(2)}{8} = -\ln(n)\frac{1}{n} \Leftrightarrow -\frac{\ln(2)}{8} = \ln(\frac{1}{n})e^{\ln(\frac{1}{n})}$</p> <p>Thus by noting that $z = \ln(\frac{1}{n})$ and $k = -\frac{\ln(2)}{8}$, we get </p> <p>$W(-\frac{\ln(2)}{8}) = \ln(\frac{1}{n}) \Leftrightarrow n = e^{-W(-\frac{\ln(2)}{8})} $</p> Answer 48: <p>Consider the equation $$0=8x^2-64x\, \log_2(x)=8x(x-8\;\log_2(x))=8 x \left(x-\frac{8 \log (x)}{\log (2)}\right)$$ So, there is the trivial solution $x=0$.</p> <p>Now, consider the function $$f(x)=x-\frac{8 \log (x)}{\log (2)}\implies f'(x)=1-\frac{8}{x \log (2)}\implies f''(x)=\frac{8}{x^2 \log (2)}$$ The first derivative cancels at $x=\frac{8}{\log (2)}$ and the second derivative test shows that this point corresponds to a minimum $$f\left(\frac{8}{\log (2)}\right)=\frac{8 }{\log (2)}\left(1+\log \left(\frac{\log (2)}{8}\right)\right)\approx -16.6886 &lt;0$$ So, there are two roots to $f(x)=0$.</p> <p>As already said in comments and answers, to compute the roots, one of the solutions is based on numerical methods (Newton being probably the simplest). A quick plot of the function shows a root close to $1$ and another close to $50$.</p> <p>Starting at a "reasonable" guess $x_0$, Newton method will update it according to $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$ In your case, this would give $$x_{n+1}=\frac{8 x_n (\log (x_n)-1)}{x_n \log (2)-8}$$</p> <p>For the first root, let us start using $x_0=1$. 
Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 1.0000000000000000000 \\ 1 &amp; 1.0948626170101320293 \\ 2 &amp; 1.0999837708827475300 \\ 3 &amp; 1.0999970301492763006 \\ 4 &amp; 1.0999970302376094009 \end{array} \right)$$</p> <p>For the second root, let us start using $x_0=50$. Newton method will generate the following iterates $$\left( \begin{array}{cc} n &amp; x_n \\ 0 &amp; 50.000000000000000000 \\ 1 &amp; 43.695596437287775210 \\ 2 &amp; 43.559336941386572072 \\ 3 &amp; 43.559260436905874322 \\ 4 &amp; 43.559260436881656414 \end{array} \right)$$</p> <p>The other solution is to use <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert function</a> and the solutions are then given by $$x_1=-\frac{8 }{\log (2)}W\left(-\frac{\log (2)}{8}\right)\qquad , \qquad x_2=-\frac{8 }{\log (2)}W_{-1}\left(-\frac{\log (2)}{8}\right)$$</p> <p>Sooner or later, you will learn that any equation which can write or rewrite as $$A+Bx+C\log(D+Ex)=0$$ has solution(s) which can be expressed in terms of Lambert function.</p>
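The Newton iterations above are easy to reproduce. Below is a minimal Python sketch (an editorial illustration, not part of the original answers): `f` and `fprime` are direct translations of the formulas quoted above, and the commented lines show an assumed SciPy cross-check against the Lambert-W closed form.

```python
import math

LOG2 = math.log(2)

def f(x):
    # f(x) = x - 8*log2(x); its zeros are the nontrivial roots of 8x^2 = 64x*log2(x)
    return x - 8 * math.log(x) / LOG2

def fprime(x):
    # f'(x) = 1 - 8/(x*ln 2)
    return 1 - 8 / (x * LOG2)

def newton(x, steps=4):
    # Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)
    for n in range(1, steps + 1):
        x -= f(x) / fprime(x)
        print(n, x)
    return x

newton(1.0)   # 1.09486..., 1.09998..., converging to ~1.0999970302
newton(50.0)  # 43.69559..., 43.55933..., converging to ~43.5592604369

# Cross-check against the closed form (assumes SciPy is available):
#   from scipy.special import lambertw
#   x1 = -8 / LOG2 * lambertw(-LOG2 / 8, 0).real    # principal branch W_0
#   x2 = -8 / LOG2 * lambertw(-LOG2 / 8, -1).real   # lower branch W_{-1}
```

Four steps from each starting point reproduce the iterates tabulated above to the displayed precision.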
Prove $\sum _{n=1}^{\infty }\:\frac{x^n}{n}$ is equal to $-\ln\left(1-x\right)$, for $|x| &lt; 1$
2,083,960
6
https://math.stackexchange.com/questions/2083960/prove-sum-n-1-infty-fracxnn-is-equal-to-ln-left1-x-right
<p>Prove $$\sum _{n=1}^{\infty }\:\frac{x^n}{n} = -\ln\left(1-x\right),$$ for $|x| &lt; 1$.</p> <p>Pretty much the title. I've searched and seen some solutions using Taylor series, but isn't there a more intuitive way to do this?</p>
[ { "answer_id": 2083974, "score": 8, "text": "<p>Since $|x|&lt;1$, the series in the LHS is absolutely convergent by comparison with a geometric series.<br>\nAdditionally, $\\frac{x^n}{n}=\\int_{0}^{x}y^{n-1}\\,dy$, hence:\n$$ \\sum_{n=1}^{N}\\frac{x^n}{n} = \\int_{0}^{x}\\frac{1-y^N}{1-y}\\,dy =-\\log(1-x)-\\int_{0}^{x}\\frac{y^N}{1-y}\\,dy.$$\nby exchanging $\\sum$ and $\\int$. Now we just need to consider the limit as $N\\to +\\infty$, and that is just $-\\log(1-x)$ since $|x|&lt;1$ grants:\n$$ \\left|\\int_{0}^{x}\\frac{y^N}{1-y}\\,dy\\right|\\leq |x|^N \\int_{0}^{x}\\frac{dy}{1-y}\\to 0.$$</p>\n" }, { "answer_id": 2083972, "score": 4, "text": "<p>Here is intuitive:</p>\n\n<p>$$-\\ln(1-x)=\\int_0^x\\frac1{1-t}\\ dt=\\int_0^x\\left(1+t+t^2+t^3+\\dots\\ \\right)\\ dx\\\\=x+\\frac{x^2}2+\\frac{x^3}3+\\frac{x^4}4+\\dots$$</p>\n" }, { "answer_id": 2083974, "score": 8, "text": "<p>Since $|x|&lt;1$, the series in the LHS is absolutely convergent by comparison with a geometric series.<br>\nAdditionally, $\\frac{x^n}{n}=\\int_{0}^{x}y^{n-1}\\,dy$, hence:\n$$ \\sum_{n=1}^{N}\\frac{x^n}{n} = \\int_{0}^{x}\\frac{1-y^N}{1-y}\\,dy =-\\log(1-x)-\\int_{0}^{x}\\frac{y^N}{1-y}\\,dy.$$\nby exchanging $\\sum$ and $\\int$. Now we just need to consider the limit as $N\\to +\\infty$, and that is just $-\\log(1-x)$ since $|x|&lt;1$ grants:\n$$ \\left|\\int_{0}^{x}\\frac{y^N}{1-y}\\,dy\\right|\\leq |x|^N \\int_{0}^{x}\\frac{dy}{1-y}\\to 0.$$</p>\n" }, { "answer_id": 2083972, "score": 4, "text": "<p>Here is intuitive:</p>\n\n<p>$$-\\ln(1-x)=\\int_0^x\\frac1{1-t}\\ dt=\\int_0^x\\left(1+t+t^2+t^3+\\dots\\ \\right)\\ dx\\\\=x+\\frac{x^2}2+\\frac{x^3}3+\\frac{x^4}4+\\dots$$</p>\n" }, { "answer_id": 2083974, "score": 8, "text": "<p>Since $|x|&lt;1$, the series in the LHS is absolutely convergent by comparison with a geometric series.<br>\nAdditionally, $\\frac{x^n}{n}=\\int_{0}^{x}y^{n-1}\\,dy$, hence:\n$$ \\sum_{n=1}^{N}\\frac{x^n}{n} = \\int_{0}^{x}\\frac{1-y^N}{1-y}\\,dy =-\\log(1-x)-\\int_{0}^{x}\\frac{y^N}{1-y}\\,dy.$$\nby exchanging $\\sum$ and $\\int$. Now we just need to consider the limit as $N\\to +\\infty$, and that is just $-\\log(1-x)$ since $|x|&lt;1$ grants:\n$$ \\left|\\int_{0}^{x}\\frac{y^N}{1-y}\\,dy\\right|\\leq |x|^N \\int_{0}^{x}\\frac{dy}{1-y}\\to 0.$$</p>\n" }, { "answer_id": 2083972, "score": 4, "text": "<p>Here is intuitive:</p>\n\n<p>$$-\\ln(1-x)=\\int_0^x\\frac1{1-t}\\ dt=\\int_0^x\\left(1+t+t^2+t^3+\\dots\\ \\right)\\ dx\\\\=x+\\frac{x^2}2+\\frac{x^3}3+\\frac{x^4}4+\\dots$$</p>\n" }, { "answer_id": 2083974, "score": 8, "text": "<p>Since $|x|&lt;1$, the series in the LHS is absolutely convergent by comparison with a geometric series.<br>\nAdditionally, $\\frac{x^n}{n}=\\int_{0}^{x}y^{n-1}\\,dy$, hence:\n$$ \\sum_{n=1}^{N}\\frac{x^n}{n} = \\int_{0}^{x}\\frac{1-y^N}{1-y}\\,dy =-\\log(1-x)-\\int_{0}^{x}\\frac{y^N}{1-y}\\,dy.$$\nby exchanging $\\sum$ and $\\int$. 
Question: Prove $\sum _{n=1}^{\infty }\:\frac{x^n}{n}$ is equal to $-\ln\left(1-x\right)$, for $|x| &lt; 1$ Answer 1: <p>Since $|x|&lt;1$, the series in the LHS is absolutely convergent by comparison with a geometric series.<br> Additionally, $\frac{x^n}{n}=\int_{0}^{x}y^{n-1}\,dy$, hence, by exchanging $\sum$ and $\int$: $$ \sum_{n=1}^{N}\frac{x^n}{n} = \int_{0}^{x}\frac{1-y^N}{1-y}\,dy =-\log(1-x)-\int_{0}^{x}\frac{y^N}{1-y}\,dy.$$ Now we just need to consider the limit as $N\to +\infty$, and that is just $-\log(1-x)$ since $|x|&lt;1$ grants: $$ \left|\int_{0}^{x}\frac{y^N}{1-y}\,dy\right|\leq |x|^N \int_{0}^{x}\frac{dy}{1-y}\to 0.$$</p> Answer 2: <p>Here is an intuitive way:</p> <p>$$-\ln(1-x)=\int_0^x\frac1{1-t}\ dt=\int_0^x\left(1+t+t^2+t^3+\dots\ \right)\ dt\\=x+\frac{x^2}2+\frac{x^3}3+\frac{x^4}4+\dots$$</p>
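To complement the two answers, here is a quick numerical sanity check in Python (again an illustrative addition, not from the thread): partial sums of the series are compared against $-\ln(1-x)$ for a few sample values of $x$ inside the unit interval.

```python
import math

def partial_sum(x, N):
    # sum_{n=1}^{N} x^n / n
    return sum(x**n / n for n in range(1, N + 1))

for x in (0.5, -0.5, 0.9):
    exact = -math.log(1.0 - x)
    approx = partial_sum(x, 500)  # tail is O(|x|^N / (N(1-|x|))), negligible here
    print(f"x={x:+.1f}: partial sum = {approx:.12f}, -ln(1-x) = {exact:.12f}")
```

The agreement to twelve decimal places reflects the error bound $\left|\int_0^x y^N/(1-y)\,dy\right|\to 0$ from Answer 1.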
Solving $\lim\limits_{x \to \infty} \frac{\sin(x)}{x-\pi}$ using L'Hôpital's rule
2,083,962
6
https://math.stackexchange.com/questions/2083962/solving-lim-limits-x-to-infty-frac-sinxx-pi-using-lh%c3%b4pitals-rul
<blockquote> <p>I know how to solve this using the squeeze theorem, but I am supposed to solve it using only L'Hôpital's rule:</p> <p>$$\lim\limits_{x \to \infty} \frac{\sin(x)}{x-\pi}$$</p> </blockquote> <p><strong>I tried</strong>: $$\lim\limits_{x \to \infty} \frac{\sin(x)}{x-\pi} = \lim\limits_{x \to \infty} \frac{d/dx[\sin(x)]}{d/dx[x-\pi]} = \lim\limits_{x \to \infty} \frac{\cos(x)}{1}$$</p> <p>From here I am stuck, because the rule no longer applies and substituting $\infty$ for $x$ does not help to simplify.</p> <p>Logically the limit is $0$, because $\sin(x)$ can only be between $-1$ and $1$, but this is using the squeeze theorem.</p> <p>Is there still any way to solve this without using the squeeze theorem?</p>
[ { "answer_id": 2083969, "score": 6, "text": "<p>I bet that your problem is, in fact, to compute</p>\n\n<p>$$\\lim _{x \\to \\color{red} \\pi} \\frac {\\sin x} {x - \\pi} ,$$</p>\n\n<p>so either you have mistyped $\\infty$ instead of $\\pi$, or there is a typo in the text where you took this from.</p>\n\n<p>In this case, L'Hospital's theorem could be used, but it's not necessary, because</p>\n\n<p>$$\\lim _{x \\to \\pi} \\frac {\\sin x} {x - \\pi} = \\lim _{x \\to \\pi} \\frac {\\sin x - 0} {x - \\pi} = \\lim _{x \\to \\pi} \\frac {\\sin x - \\sin \\pi} {x - \\pi} = (\\sin ') (\\pi) = \\cos \\pi = -1 .$$</p>\n" }, { "answer_id": 2083966, "score": 4, "text": "<p>Unfortunately L'Hopital would not help since this does not satisfy the <em>assumptions</em>. </p>\n\n<p>Note that \n$$\n\\left|\\frac{\\sin x}{x-\\pi}\\right|\\leq\\frac{1}{|x-\\pi|}\\to 0\n$$\nas $x\\to\\infty$. </p>\n\n<p>What is the point <em>not</em> to use the squeeze theorem?</p>\n" }, { "answer_id": 2083964, "score": 2, "text": "<p>L'Hopital's Rule only works when the limit is $0/0$ or $\\infty/\\infty$. In this case, the limit is undefined over $\\infty$, so you cannot use L'Hopital's Rule on this problem (at least the way it's written.)</p>\n" }, { "answer_id": 2083965, "score": 0, "text": "<p>You can't. To apply L'Hospital's rule, it must be indeterminate form i.e. $\\frac\\infty\\infty$ or $\\frac00$.</p>\n\n<p>As it is neither, you should simply use squeeze theorem.</p>\n" }, { "answer_id": 2083969, "score": 6, "text": "<p>I bet that your problem is, in fact, to compute</p>\n\n<p>$$\\lim _{x \\to \\color{red} \\pi} \\frac {\\sin x} {x - \\pi} ,$$</p>\n\n<p>so either you have mistyped $\\infty$ instead of $\\pi$, or there is a typo in the text where you took this from.</p>\n\n<p>In this case, L'Hospital's theorem could be used, but it's not necessary, because</p>\n\n<p>$$\\lim _{x \\to \\pi} \\frac {\\sin x} {x - \\pi} = \\lim _{x \\to \\pi} \\frac {\\sin x - 0} {x - \\pi} = \\lim _{x \\to \\pi} \\frac {\\sin x - \\sin \\pi} {x - \\pi} = (\\sin ') (\\pi) = \\cos \\pi = -1 .$$</p>\n" }, { "answer_id": 2083966, "score": 4, "text": "<p>Unfortunately L'Hopital would not help since this does not satisfy the <em>assumptions</em>. </p>\n\n<p>Note that \n$$\n\\left|\\frac{\\sin x}{x-\\pi}\\right|\\leq\\frac{1}{|x-\\pi|}\\to 0\n$$\nas $x\\to\\infty$. </p>\n\n<p>What is the point <em>not</em> to use the squeeze theorem?</p>\n" }, { "answer_id": 2083964, "score": 2, "text": "<p>L'Hopital's Rule only works when the limit is $0/0$ or $\\infty/\\infty$. In this case, the limit is undefined over $\\infty$, so you cannot use L'Hopital's Rule on this problem (at least the way it's written.)</p>\n" }, { "answer_id": 2083965, "score": 0, "text": "<p>You can't. To apply L'Hospital's rule, it must be indeterminate form i.e. 
$\\frac\\infty\\infty$ or $\\frac00$.</p>\n\n<p>As it is neither, you should simply use squeeze theorem.</p>\n" }, { "answer_id": 2083969, "score": 6, "text": "<p>I bet that your problem is, in fact, to compute</p>\n\n<p>$$\\lim _{x \\to \\color{red} \\pi} \\frac {\\sin x} {x - \\pi} ,$$</p>\n\n<p>so either you have mistyped $\\infty$ instead of $\\pi$, or there is a typo in the text where you took this from.</p>\n\n<p>In this case, L'Hospital's theorem could be used, but it's not necessary, because</p>\n\n<p>$$\\lim _{x \\to \\pi} \\frac {\\sin x} {x - \\pi} = \\lim _{x \\to \\pi} \\frac {\\sin x - 0} {x - \\pi} = \\lim _{x \\to \\pi} \\frac {\\sin x - \\sin \\pi} {x - \\pi} = (\\sin ') (\\pi) = \\cos \\pi = -1 .$$</p>\n" }, { "answer_id": 2083966, "score": 4, "text": "<p>Unfortunately L'Hopital would not help since this does not satisfy the <em>assumptions</em>. </p>\n\n<p>Note that \n$$\n\\left|\\frac{\\sin x}{x-\\pi}\\right|\\leq\\frac{1}{|x-\\pi|}\\to 0\n$$\nas $x\\to\\infty$. </p>\n\n<p>What is the point <em>not</em> to use the squeeze theorem?</p>\n" }, { "answer_id": 2083964, "score": 2, "text": "<p>L'Hopital's Rule only works when the limit is $0/0$ or $\\infty/\\infty$. In this case, the limit is undefined over $\\infty$, so you cannot use L'Hopital's Rule on this problem (at least the way it's written.)</p>\n" }, { "answer_id": 2083965, "score": 0, "text": "<p>You can't. To apply L'Hospital's rule, it must be indeterminate form i.e. $\\frac\\infty\\infty$ or $\\frac00$.</p>\n\n<p>As it is neither, you should simply use squeeze theorem.</p>\n" }, { "answer_id": 2083969, "score": 6, "text": "<p>I bet that your problem is, in fact, to compute</p>\n\n<p>$$\\lim _{x \\to \\color{red} \\pi} \\frac {\\sin x} {x - \\pi} ,$$</p>\n\n<p>so either you have mistyped $\\infty$ instead of $\\pi$, or there is a typo in the text where you took this from.</p>\n\n<p>In this case, L'Hospital's theorem could be used, but it's not necessary, because</p>\n\n<p>$$\\lim _{x \\to \\pi} \\frac {\\sin x} {x - \\pi} = \\lim _{x \\to \\pi} \\frac {\\sin x - 0} {x - \\pi} = \\lim _{x \\to \\pi} \\frac {\\sin x - \\sin \\pi} {x - \\pi} = (\\sin ') (\\pi) = \\cos \\pi = -1 .$$</p>\n" }, { "answer_id": 2083966, "score": 4, "text": "<p>Unfortunately L'Hopital would not help since this does not satisfy the <em>assumptions</em>. </p>\n\n<p>Note that \n$$\n\\left|\\frac{\\sin x}{x-\\pi}\\right|\\leq\\frac{1}{|x-\\pi|}\\to 0\n$$\nas $x\\to\\infty$. </p>\n\n<p>What is the point <em>not</em> to use the squeeze theorem?</p>\n" }, { "answer_id": 2083964, "score": 2, "text": "<p>L'Hopital's Rule only works when the limit is $0/0$ or $\\infty/\\infty$. In this case, the limit is undefined over $\\infty$, so you cannot use L'Hopital's Rule on this problem (at least the way it's written.)</p>\n" }, { "answer_id": 2083965, "score": 0, "text": "<p>You can't. To apply L'Hospital's rule, it must be indeterminate form i.e. 
$\\frac\\infty\\infty$ or $\\frac00$.</p>\n\n<p>As it is neither, you should simply use the squeeze theorem.</p>\n" } ]
Question: Solving $\lim\limits_{x \to \infty} \frac{\sin(x)}{x-\pi}$ using L'Hôpital's rule Answer 1: <p>I bet that your problem is, in fact, to compute</p> <p>$$\lim _{x \to \color{red} \pi} \frac {\sin x} {x - \pi} ,$$</p> <p>so either you have mistyped $\infty$ instead of $\pi$, or there is a typo in the text where you took this from.</p> <p>In this case, L'Hospital's theorem could be used, but it's not necessary, because</p> <p>$$\lim _{x \to \pi} \frac {\sin x} {x - \pi} = \lim _{x \to \pi} \frac {\sin x - 0} {x - \pi} = \lim _{x \to \pi} \frac {\sin x - \sin \pi} {x - \pi} = (\sin ') (\pi) = \cos \pi = -1 .$$</p> Answer 2: <p>Unfortunately L'Hopital would not help since this does not satisfy the <em>assumptions</em>. </p> <p>Note that $$ \left|\frac{\sin x}{x-\pi}\right|\leq\frac{1}{|x-\pi|}\to 0 $$ as $x\to\infty$. </p> <p>What is the point of <em>not</em> using the squeeze theorem?</p> Answer 3: <p>L'Hopital's Rule only works when the limit is $0/0$ or $\infty/\infty$. In this case, the limit is undefined over $\infty$, so you cannot use L'Hopital's Rule on this problem (at least the way it's written.)</p> Answer 4: <p>You can't. To apply L'Hospital's rule, it must be an indeterminate form, i.e. $\frac\infty\infty$ or $\frac00$.</p> <p>As it is neither, you should simply use the squeeze theorem.</p>
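For completeness, here is the squeeze argument that Answers 2 and 4 above point to, written out in full; this is a standard sketch added for illustration, using only the bound $|\sin x| \le 1$ and the fact that $x - \pi > 0$ once $x > \pi$:

```latex
% Squeeze bound for sin(x)/(x - pi) as x -> infinity:
% |sin x| <= 1 and x - pi > 0 for x > pi, so
\[
  -\frac{1}{x-\pi} \;\le\; \frac{\sin x}{x-\pi} \;\le\; \frac{1}{x-\pi}
  \qquad (x > \pi).
\]
% Both outer bounds tend to 0, so by the squeeze theorem
\[
  \lim_{x\to\infty} \frac{\sin x}{x-\pi} = 0 .
\]
```

This also shows why L'Hôpital's rule does not apply as written: the numerator stays bounded while the denominator grows, so the quotient is never an indeterminate form.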
Confused about the definition of a random sample, statistics and estimators/estimates
2,084,035
6
https://math.stackexchange.com/questions/2084035/confused-about-the-definition-of-a-random-sample-statistics-and-estimators-esti
<p>I'm currently studying basic statistics, and I don't really understand the definition of <em>random sample</em> in the book I'm reading (<em>Introduction to Probability and Statistics: Principles and Applications for Engineering and the Computing Sciences, 4th Edition</em>).</p> <p>The book defines random samples as follows:</p> <blockquote> <p>[...] the term "random sample" is used in three different but closely related ways in applied statistics. It may refer to the <em>objects</em> selected for study, to the <em>random variables</em> associated with the objects to be selected, or to the <em>numerical values</em> assumed by those variables. <p> A random sample of size $n$ from the distribution of $X$ is a collection of $n$ independent random variables, each with the same distribution as $X$. <p> [...] The objects selected generate $n$ <em>numbers</em> $x_1,x_2,x_3,\ldots,x_n$ which are the observed values of the random variables $X_1,X_2,X_3,\ldots,X_n$.</p> </blockquote> <p>At this point, I don't really see the point of thinking of a random sample as a collection of random variables, considering the data gathered for an experiment is a set of constants.</p> <p>A <em>statistic</em> is defined by the book as "a random variable whose numerical value can be determined from a random sample". As an example, the <em>sample mean</em> statistic of a sample (a set of random variables) $\{1, 2, 3\}$ is defined as $$ \bar{X} = \frac{X_1 + \cdots + X_n}{n} $$ where $X_{1..n}$ are random variables (which means that $\bar{X}$ is also a random variable), as opposed to $$ \bar{x} = \frac{x_1 + \cdots + x_n}{n} $$ where $x_{1..n}$ are constant numbers (which means that $\bar{x}$ is a single number).</p> <p>Now, since $X_1, \ldots, X_n$ are random variables but each have a single, known value ($X_i$ assumes the value $x_i$), $\bar{X}$ is also a random variable which assumes a known value -- i.e., $(x_1 + \cdots + x_n)/n$. (Please correct me if I'm wrong here.)</p> <p>Here's my confusion. I think I've understood the concept of statistics and estimators -- to provide an estimation for population parameters using the characteristics of a sample. However, I don't see a reason behind thinking of a random sample as a set of random variables -- as opposed to the (in my eyes) more "natural" way of thinking of them as simply a set of numerical constants -- or thinking of a statistic like the sample mean, median, range, etc. as a random variable rather than a single, constant numerical value.</p> <hr> <p>This is my first post here, so the question probably has problems with structure, length and clarity, but what I'm essentially asking is for some "justification" for thinking of <strong>a)</strong> a random sample as a set of random variables as opposed to a set of numbers, and <strong>b)</strong> a statistic as a random variable rather than a constant.</p>
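The answers below argue that a statistic such as $\bar X$ is treated as a random variable because its value changes from one sample to the next. As a supplement to the thread (not part of the original posts), here is a minimal simulation sketch of that idea; it assumes NumPy and uses made-up parameters ($\mu = 3$, $n = 100$, $10{,}000$ repeated samples):

```python
# Sketch: the sample mean and sample variance vary across repeated samples,
# which is why they are modeled as random variables. Assumes NumPy.
import numpy as np

rng = np.random.default_rng(0)
mu, n, trials = 3.0, 100, 10_000

# Each row is one fresh sample of size n from N(mu, 1).
samples = rng.normal(loc=mu, scale=1.0, size=(trials, n))

# Viewed as a random variable, X-bar ~ N(mu, 1/n), so its spread across
# repeated samples should be close to 1/sqrt(n).
xbar = samples.mean(axis=1)
print("sd of sample means:", xbar.std(), "vs 1/sqrt(n):", 1 / np.sqrt(n))

# Model check in the spirit of the second answer: how often does a
# variance-1 model produce a sample variance above 1.1 at n = 100?
s2 = samples.var(axis=1, ddof=1)
print("fraction of trials with sample variance > 1.1:", (s2 > 1.1).mean())
```

Each run realizes $10{,}000$ values of the statistic $\bar X$; any single dataset only ever shows one of them, which is exactly the distinction between $\bar X$ and $\bar x$ drawn in the question.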
[ { "answer_id": 2084077, "score": 1, "text": "<p>You need to remember that we don't usually take just one sample, but often many repeated samples from the same population.</p>\n\n<p>Suppose we take a sample of size $2$ from some large population. The first time I might get $3$ and $7$. The second time I might get $9$ and $4$. The third time I might get $6$ and $3$. Each time I take a fresh sample, the two values are potentially different to what they were before. It is natural to think of these as being random variables because I know the properties of random variables.</p>\n\n<p>If I also calculate the mean, I will have a different value for the \"sample statistic\" each time: first $5$, then $6.5$ and then $4.5$</p>\n\n<p>Again, because the sample mean is changing each time, it is natural to treat the sample statistic as a random variable.</p>\n" }, { "answer_id": 2084103, "score": 1, "text": "<p>In a statistical model, you assume that the sample $x_1\\ldots x_n$ is drawn from some distribution $f_{X_1\\ldots X_n}(x_1,\\ldots ,x_n)$ so the actual numerical values of the sample are a realization of the random variables $X_1\\ldots X_n.$ This is a <em>model</em>. We are assuming that the sample is an outcome of a random process.</p>\n\n<p>The reason this is useful, and also why considering sample statistics like the sample mean as random variables is useful, is that we can use the sample to fit and validate the model. So if we have a model in mind and we collect a sample, it is useful to know what the probability of that sample is under the model (this is called likelihood). If we draw a very improbable sample (in some sense), it will cast doubt on the validitity of the model. And if the model has free parameters, we can choose the parameters as ones that are consistent with our sample (e.g. maximum likelihood estimation).</p>\n\n<p>For instance, take a simple artificial example where we believe that our sample is independent normals with known variance $1$. But we don't know the mean. To fit the mean from the data, we look at the sample mean $\\bar x$ and estimate $\\hat \\mu = \\bar x.$ It turns out that this provides the best estimate for the mean under the model. However, we'd like to know two other things </p>\n\n<ol>\n<li><p>what is the error in our estimate. </p></li>\n<li><p>is our model any good, anyway?</p></li>\n</ol>\n\n<p>It turns out considering sample statistics as random variables is useful for both of these.</p>\n\n<p>We usually assume 2 when considering 1 (otherwise, what would be the point). So for 1, we want to know the error of the estimate. Obviously if we have like 5 samples, our error will be pretty big, and will decrease if we have more samples. We can make this quantitative by looking at the distribution of the sample mean (as as random variable). If the true mean of the normals is $\\mu$, then the sample mean of $n$ samples will have distribution $\\mathcal{N}(\\mu, \\frac{1}{n}),$ i.e. will be normal with variance $1/n$ (standard deviation $1/\\sqrt{n}$), This tells you that the sample mean from the sample you draw will probably be within about $1/\\sqrt{n}$ of the true mean. This gives an immediate estimate of the error and you can get more quantitative if you wish (i.e. confidence intervals).</p>\n\n<p>For 2, we could look at our sample and take the sample variance. If our sample is large, we'd expect this to come out to close to 1. If it doesn't, our model might be wrong. Say we from a sample of 100 we get a sample variance of 1.1. 
Does that mean our model is wrong? In order to get a quantitative handle on the question, it's useful to consider the distribution of the sample variance under our model. Then we could compute something like $Prob(\\mbox{sample variance} &gt; 1.1)$ under the model. If this is very low, we have cause to question the assumptions of our model. In this case, the natural assumption to question is that the $X_i$'s have variance 1. 
</p>\n\n<p>We could then relax that assumption and fit a new model where the variance is unknown. Then if we are still skeptical of some of our assumptions, we could think of another statistic to test (say, 4th moment or autocorrelation) and do it the same as before. If these come out looking unlikely, maybe the assumption of normality or independence is wrong.</p>\n" }, { "answer_id": 2084077, "score": 1, "text": "<p>You need to remember that we don't usually take just one sample, but often many repeated samples from the same population.</p>\n\n<p>Suppose we take a sample of size $2$ from some large population. The first time I might get $3$ and $7$. The second time I might get $9$ and $4$. The third time I might get $6$ and $3$. Each time I take a fresh sample, the two values are potentially different to what they were before. It is natural to think of these as being random variables because I know the properties of random variables.</p>\n\n<p>If I also calculate the mean, I will have a different value for the \"sample statistic\" each time: first $5$, then $6.5$ and then $4.5$</p>\n\n<p>Again, because the sample mean is changing each time, it is natural to treat the sample statistic as a random variable.</p>\n" }, { "answer_id": 2084103, "score": 1, "text": "<p>In a statistical model, you assume that the sample $x_1\\ldots x_n$ is drawn from some distribution $f_{X_1\\ldots X_n}(x_1,\\ldots ,x_n)$ so the actual numerical values of the sample are a realization of the random variables $X_1\\ldots X_n.$ This is a <em>model</em>. We are assuming that the sample is an outcome of a random process.</p>\n\n<p>The reason this is useful, and also why considering sample statistics like the sample mean as random variables is useful, is that we can use the sample to fit and validate the model. So if we have a model in mind and we collect a sample, it is useful to know what the probability of that sample is under the model (this is called likelihood). If we draw a very improbable sample (in some sense), it will cast doubt on the validitity of the model. And if the model has free parameters, we can choose the parameters as ones that are consistent with our sample (e.g. maximum likelihood estimation).</p>\n\n<p>For instance, take a simple artificial example where we believe that our sample is independent normals with known variance $1$. But we don't know the mean. To fit the mean from the data, we look at the sample mean $\\bar x$ and estimate $\\hat \\mu = \\bar x.$ It turns out that this provides the best estimate for the mean under the model. However, we'd like to know two other things </p>\n\n<ol>\n<li><p>what is the error in our estimate. </p></li>\n<li><p>is our model any good, anyway?</p></li>\n</ol>\n\n<p>It turns out considering sample statistics as random variables is useful for both of these.</p>\n\n<p>We usually assume 2 when considering 1 (otherwise, what would be the point). So for 1, we want to know the error of the estimate. Obviously if we have like 5 samples, our error will be pretty big, and will decrease if we have more samples. We can make this quantitative by looking at the distribution of the sample mean (as as random variable). If the true mean of the normals is $\\mu$, then the sample mean of $n$ samples will have distribution $\\mathcal{N}(\\mu, \\frac{1}{n}),$ i.e. will be normal with variance $1/n$ (standard deviation $1/\\sqrt{n}$), This tells you that the sample mean from the sample you draw will probably be within about $1/\\sqrt{n}$ of the true mean. 
This gives an immediate estimate of the error and you can get more quantitative if you wish (i.e. confidence intervals).</p>\n\n<p>For 2, we could look at our sample and take the sample variance. If our sample is large, we'd expect this to come out to close to 1. If it doesn't, our model might be wrong. Say we from a sample of 100 we get a sample variance of 1.1. Does that mean our model is wrong? In order to get a quantitative handle on the question, it's useful to consider the distribution of the sample variance under our model. Then we could compute something like $Prob(\\mbox{sample variance} &gt; 1.1)$ under the model. If this is very low, we have cause to question the assumptions of our model. In this case, the natural assumption to question is that the $X_i$'s have variance 1. </p>\n\n<p>We could then relax that assumption and fit a new model where the variance is unknown. Then if we are still skeptical of some of our assumptions, we could think of another statistic to test (say, 4th moment or autocorrelation) and do it the same as before. If these come out looking unlikely, maybe the assumption of normality or independence is wrong.</p>\n" }, { "answer_id": 2084077, "score": 1, "text": "<p>You need to remember that we don't usually take just one sample, but often many repeated samples from the same population.</p>\n\n<p>Suppose we take a sample of size $2$ from some large population. The first time I might get $3$ and $7$. The second time I might get $9$ and $4$. The third time I might get $6$ and $3$. Each time I take a fresh sample, the two values are potentially different to what they were before. It is natural to think of these as being random variables because I know the properties of random variables.</p>\n\n<p>If I also calculate the mean, I will have a different value for the \"sample statistic\" each time: first $5$, then $6.5$ and then $4.5$</p>\n\n<p>Again, because the sample mean is changing each time, it is natural to treat the sample statistic as a random variable.</p>\n" }, { "answer_id": 2084103, "score": 1, "text": "<p>In a statistical model, you assume that the sample $x_1\\ldots x_n$ is drawn from some distribution $f_{X_1\\ldots X_n}(x_1,\\ldots ,x_n)$ so the actual numerical values of the sample are a realization of the random variables $X_1\\ldots X_n.$ This is a <em>model</em>. We are assuming that the sample is an outcome of a random process.</p>\n\n<p>The reason this is useful, and also why considering sample statistics like the sample mean as random variables is useful, is that we can use the sample to fit and validate the model. So if we have a model in mind and we collect a sample, it is useful to know what the probability of that sample is under the model (this is called likelihood). If we draw a very improbable sample (in some sense), it will cast doubt on the validitity of the model. And if the model has free parameters, we can choose the parameters as ones that are consistent with our sample (e.g. maximum likelihood estimation).</p>\n\n<p>For instance, take a simple artificial example where we believe that our sample is independent normals with known variance $1$. But we don't know the mean. To fit the mean from the data, we look at the sample mean $\\bar x$ and estimate $\\hat \\mu = \\bar x.$ It turns out that this provides the best estimate for the mean under the model. However, we'd like to know two other things </p>\n\n<ol>\n<li><p>what is the error in our estimate. 
</p></li>\n<li><p>is our model any good, anyway?</p></li>\n</ol>\n\n<p>It turns out considering sample statistics as random variables is useful for both of these.</p>\n\n<p>We usually assume 2 when considering 1 (otherwise, what would be the point). So for 1, we want to know the error of the estimate. Obviously if we have like 5 samples, our error will be pretty big, and will decrease if we have more samples. We can make this quantitative by looking at the distribution of the sample mean (as as random variable). If the true mean of the normals is $\\mu$, then the sample mean of $n$ samples will have distribution $\\mathcal{N}(\\mu, \\frac{1}{n}),$ i.e. will be normal with variance $1/n$ (standard deviation $1/\\sqrt{n}$), This tells you that the sample mean from the sample you draw will probably be within about $1/\\sqrt{n}$ of the true mean. This gives an immediate estimate of the error and you can get more quantitative if you wish (i.e. confidence intervals).</p>\n\n<p>For 2, we could look at our sample and take the sample variance. If our sample is large, we'd expect this to come out to close to 1. If it doesn't, our model might be wrong. Say we from a sample of 100 we get a sample variance of 1.1. Does that mean our model is wrong? In order to get a quantitative handle on the question, it's useful to consider the distribution of the sample variance under our model. Then we could compute something like $Prob(\\mbox{sample variance} &gt; 1.1)$ under the model. If this is very low, we have cause to question the assumptions of our model. In this case, the natural assumption to question is that the $X_i$'s have variance 1. </p>\n\n<p>We could then relax that assumption and fit a new model where the variance is unknown. Then if we are still skeptical of some of our assumptions, we could think of another statistic to test (say, 4th moment or autocorrelation) and do it the same as before. If these come out looking unlikely, maybe the assumption of normality or independence is wrong.</p>\n" }, { "answer_id": 2084077, "score": 1, "text": "<p>You need to remember that we don't usually take just one sample, but often many repeated samples from the same population.</p>\n\n<p>Suppose we take a sample of size $2$ from some large population. The first time I might get $3$ and $7$. The second time I might get $9$ and $4$. The third time I might get $6$ and $3$. Each time I take a fresh sample, the two values are potentially different to what they were before. It is natural to think of these as being random variables because I know the properties of random variables.</p>\n\n<p>If I also calculate the mean, I will have a different value for the \"sample statistic\" each time: first $5$, then $6.5$ and then $4.5$</p>\n\n<p>Again, because the sample mean is changing each time, it is natural to treat the sample statistic as a random variable.</p>\n" }, { "answer_id": 2084103, "score": 1, "text": "<p>In a statistical model, you assume that the sample $x_1\\ldots x_n$ is drawn from some distribution $f_{X_1\\ldots X_n}(x_1,\\ldots ,x_n)$ so the actual numerical values of the sample are a realization of the random variables $X_1\\ldots X_n.$ This is a <em>model</em>. We are assuming that the sample is an outcome of a random process.</p>\n\n<p>The reason this is useful, and also why considering sample statistics like the sample mean as random variables is useful, is that we can use the sample to fit and validate the model. 
So if we have a model in mind and we collect a sample, it is useful to know what the probability of that sample is under the model (this is called likelihood). If we draw a very improbable sample (in some sense), it will cast doubt on the validitity of the model. And if the model has free parameters, we can choose the parameters as ones that are consistent with our sample (e.g. maximum likelihood estimation).</p>\n\n<p>For instance, take a simple artificial example where we believe that our sample is independent normals with known variance $1$. But we don't know the mean. To fit the mean from the data, we look at the sample mean $\\bar x$ and estimate $\\hat \\mu = \\bar x.$ It turns out that this provides the best estimate for the mean under the model. However, we'd like to know two other things </p>\n\n<ol>\n<li><p>what is the error in our estimate. </p></li>\n<li><p>is our model any good, anyway?</p></li>\n</ol>\n\n<p>It turns out considering sample statistics as random variables is useful for both of these.</p>\n\n<p>We usually assume 2 when considering 1 (otherwise, what would be the point). So for 1, we want to know the error of the estimate. Obviously if we have like 5 samples, our error will be pretty big, and will decrease if we have more samples. We can make this quantitative by looking at the distribution of the sample mean (as as random variable). If the true mean of the normals is $\\mu$, then the sample mean of $n$ samples will have distribution $\\mathcal{N}(\\mu, \\frac{1}{n}),$ i.e. will be normal with variance $1/n$ (standard deviation $1/\\sqrt{n}$), This tells you that the sample mean from the sample you draw will probably be within about $1/\\sqrt{n}$ of the true mean. This gives an immediate estimate of the error and you can get more quantitative if you wish (i.e. confidence intervals).</p>\n\n<p>For 2, we could look at our sample and take the sample variance. If our sample is large, we'd expect this to come out to close to 1. If it doesn't, our model might be wrong. Say we from a sample of 100 we get a sample variance of 1.1. Does that mean our model is wrong? In order to get a quantitative handle on the question, it's useful to consider the distribution of the sample variance under our model. Then we could compute something like $Prob(\\mbox{sample variance} &gt; 1.1)$ under the model. If this is very low, we have cause to question the assumptions of our model. In this case, the natural assumption to question is that the $X_i$'s have variance 1. </p>\n\n<p>We could then relax that assumption and fit a new model where the variance is unknown. Then if we are still skeptical of some of our assumptions, we could think of another statistic to test (say, 4th moment or autocorrelation) and do it the same as before. If these come out looking unlikely, maybe the assumption of normality or independence is wrong.</p>\n" }, { "answer_id": 2084077, "score": 1, "text": "<p>You need to remember that we don't usually take just one sample, but often many repeated samples from the same population.</p>\n\n<p>Suppose we take a sample of size $2$ from some large population. The first time I might get $3$ and $7$. The second time I might get $9$ and $4$. The third time I might get $6$ and $3$. Each time I take a fresh sample, the two values are potentially different to what they were before. 
It is natural to think of these as being random variables because I know the properties of random variables.</p>\n\n<p>If I also calculate the mean, I will have a different value for the \"sample statistic\" each time: first $5$, then $6.5$ and then $4.5$</p>\n\n<p>Again, because the sample mean is changing each time, it is natural to treat the sample statistic as a random variable.</p>\n" }, { "answer_id": 2084103, "score": 1, "text": "<p>In a statistical model, you assume that the sample $x_1\\ldots x_n$ is drawn from some distribution $f_{X_1\\ldots X_n}(x_1,\\ldots ,x_n)$ so the actual numerical values of the sample are a realization of the random variables $X_1\\ldots X_n.$ This is a <em>model</em>. We are assuming that the sample is an outcome of a random process.</p>\n\n<p>The reason this is useful, and also why considering sample statistics like the sample mean as random variables is useful, is that we can use the sample to fit and validate the model. So if we have a model in mind and we collect a sample, it is useful to know what the probability of that sample is under the model (this is called likelihood). If we draw a very improbable sample (in some sense), it will cast doubt on the validitity of the model. And if the model has free parameters, we can choose the parameters as ones that are consistent with our sample (e.g. maximum likelihood estimation).</p>\n\n<p>For instance, take a simple artificial example where we believe that our sample is independent normals with known variance $1$. But we don't know the mean. To fit the mean from the data, we look at the sample mean $\\bar x$ and estimate $\\hat \\mu = \\bar x.$ It turns out that this provides the best estimate for the mean under the model. However, we'd like to know two other things </p>\n\n<ol>\n<li><p>what is the error in our estimate. </p></li>\n<li><p>is our model any good, anyway?</p></li>\n</ol>\n\n<p>It turns out considering sample statistics as random variables is useful for both of these.</p>\n\n<p>We usually assume 2 when considering 1 (otherwise, what would be the point). So for 1, we want to know the error of the estimate. Obviously if we have like 5 samples, our error will be pretty big, and will decrease if we have more samples. We can make this quantitative by looking at the distribution of the sample mean (as as random variable). If the true mean of the normals is $\\mu$, then the sample mean of $n$ samples will have distribution $\\mathcal{N}(\\mu, \\frac{1}{n}),$ i.e. will be normal with variance $1/n$ (standard deviation $1/\\sqrt{n}$), This tells you that the sample mean from the sample you draw will probably be within about $1/\\sqrt{n}$ of the true mean. This gives an immediate estimate of the error and you can get more quantitative if you wish (i.e. confidence intervals).</p>\n\n<p>For 2, we could look at our sample and take the sample variance. If our sample is large, we'd expect this to come out to close to 1. If it doesn't, our model might be wrong. Say we from a sample of 100 we get a sample variance of 1.1. Does that mean our model is wrong? In order to get a quantitative handle on the question, it's useful to consider the distribution of the sample variance under our model. Then we could compute something like $Prob(\\mbox{sample variance} &gt; 1.1)$ under the model. If this is very low, we have cause to question the assumptions of our model. In this case, the natural assumption to question is that the $X_i$'s have variance 1. 
</p>\n\n<p>We could then relax that assumption and fit a new model where the variance is unknown. Then if we are still skeptical of some of our assumptions, we could think of another statistic to test (say, 4th moment or autocorrelation) and do it the same as before. If these come out looking unlikely, maybe the assumption of normality or independence is wrong.</p>\n" } ]
Question: Confused about the definition of a random sample, statistics and estimators/estimates Answer 1: <p>You need to remember that we don't usually take just one sample, but often many repeated samples from the same population.</p> <p>Suppose we take a sample of size $2$ from some large population. The first time I might get $3$ and $7$. The second time I might get $9$ and $4$. The third time I might get $6$ and $3$. Each time I take a fresh sample, the two values are potentially different to what they were before. It is natural to think of these as being random variables because I know the properties of random variables.</p> <p>If I also calculate the mean, I will have a different value for the "sample statistic" each time: first $5$, then $6.5$, and then $4.5$.</p> <p>Again, because the sample mean is changing each time, it is natural to treat the sample statistic as a random variable.</p>
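<p>To make this concrete, here is a minimal simulation sketch (in Python, assuming NumPy is available; the population and sample size are invented for illustration) of drawing repeated samples of size $2$ and watching the sample mean vary:</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
population = np.arange(1, 11)  # a small made-up population: 1..10

# Draw 5 independent samples of size 2 and compute each sample mean.
for i in range(5):
    sample = rng.choice(population, size=2, replace=False)
    print(sample, sample.mean())
</code></pre>
<p>Each run of the loop plays the role of "taking a fresh sample"; the printed means jump around from draw to draw, which is exactly the behaviour of a random variable.</p>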
Answer 2: <p>In a statistical model, you assume that the sample $x_1\ldots x_n$ is drawn from some distribution $f_{X_1\ldots X_n}(x_1,\ldots ,x_n)$, so the actual numerical values of the sample are a realization of the random variables $X_1\ldots X_n.$ This is a <em>model</em>. We are assuming that the sample is an outcome of a random process.</p> <p>The reason this is useful, and also why considering sample statistics like the sample mean as random variables is useful, is that we can use the sample to fit and validate the model. So if we have a model in mind and we collect a sample, it is useful to know what the probability of that sample is under the model (this is called the likelihood). If we draw a very improbable sample (in some sense), it will cast doubt on the validity of the model. And if the model has free parameters, we can choose parameters that are consistent with our sample (e.g. maximum likelihood estimation).</p>
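<p>As a rough sketch of what "probability of the sample under the model" means in practice (Python again; the sample values and the two candidate models are invented for illustration), one can compare log-likelihoods:</p>
<pre><code>import numpy as np
from scipy.stats import norm

x = np.array([0.3, -1.2, 0.8, 1.5, -0.4])  # a made-up sample

# Log-likelihood of the sample under N(mu, 1): the sum of log densities,
# using the assumed independence of the observations.
def loglik(mu, x):
    return norm.logpdf(x, loc=mu, scale=1.0).sum()

print(loglik(0.0, x))   # candidate model: mean 0
print(loglik(5.0, x))   # candidate model: mean 5 -- far less probable
</code></pre>
<p>A sample that is wildly improbable under a candidate model (like the second print here) is evidence against that model, which is the intuition behind both model checking and maximum likelihood fitting.</p>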
<p>For instance, take a simple artificial example where we believe that our sample consists of independent normals with known variance $1$, but we don't know the mean. To fit the mean from the data, we look at the sample mean $\bar x$ and estimate $\hat \mu = \bar x.$ It turns out that this provides the best estimate for the mean under the model. However, we'd like to know two other things:</p> <ol> <li><p>what is the error in our estimate?</p></li> <li><p>is our model any good, anyway?</p></li> </ol> <p>It turns out considering sample statistics as random variables is useful for both of these; a numerical check of the estimator itself is sketched below.</p>
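<p>Here is a small sketch (Python, with an invented sample) of why $\bar x$ is the natural fit: over a grid of candidate means, the log-likelihood under the $\mathcal{N}(\mu, 1)$ model peaks at (approximately) the sample mean:</p>
<pre><code>import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=50)  # pretend the true mean is unknown

# Evaluate the log-likelihood on a grid of candidate means.
mus = np.linspace(0.0, 4.0, 401)
loglik = np.array([norm.logpdf(x, loc=m, scale=1.0).sum() for m in mus])

print("grid maximizer:", mus[np.argmax(loglik)])
print("sample mean:   ", x.mean())  # agrees up to the grid resolution
</code></pre>
<p>This agrees with the closed-form fact that for i.i.d. $\mathcal{N}(\mu, 1)$ data the log-likelihood is a downward parabola in $\mu$, maximized exactly at $\bar x$.</p>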
<p>We usually assume 2 when considering 1 (otherwise, what would be the point?). So for 1, we want to know the error of the estimate.
We are assuming that the sample is an outcome of a random process.</p> <p>The reason this is useful, and also why considering sample statistics like the sample mean as random variables is useful, is that we can use the sample to fit and validate the model. So if we have a model in mind and we collect a sample, it is useful to know what the probability of that sample is under the model (this is called likelihood). If we draw a very improbable sample (in some sense), it will cast doubt on the validitity of the model. And if the model has free parameters, we can choose the parameters as ones that are consistent with our sample (e.g. maximum likelihood estimation).</p> <p>For instance, take a simple artificial example where we believe that our sample is independent normals with known variance $1$. But we don't know the mean. To fit the mean from the data, we look at the sample mean $\bar x$ and estimate $\hat \mu = \bar x.$ It turns out that this provides the best estimate for the mean under the model. However, we'd like to know two other things </p> <ol> <li><p>what is the error in our estimate. </p></li> <li><p>is our model any good, anyway?</p></li> </ol> <p>It turns out considering sample statistics as random variables is useful for both of these.</p> <p>We usually assume 2 when considering 1 (otherwise, what would be the point). So for 1, we want to know the error of the estimate. Obviously if we have like 5 samples, our error will be pretty big, and will decrease if we have more samples. We can make this quantitative by looking at the distribution of the sample mean (as as random variable). If the true mean of the normals is $\mu$, then the sample mean of $n$ samples will have distribution $\mathcal{N}(\mu, \frac{1}{n}),$ i.e. will be normal with variance $1/n$ (standard deviation $1/\sqrt{n}$), This tells you that the sample mean from the sample you draw will probably be within about $1/\sqrt{n}$ of the true mean. This gives an immediate estimate of the error and you can get more quantitative if you wish (i.e. confidence intervals).</p> <p>For 2, we could look at our sample and take the sample variance. If our sample is large, we'd expect this to come out to close to 1. If it doesn't, our model might be wrong. Say we from a sample of 100 we get a sample variance of 1.1. Does that mean our model is wrong? In order to get a quantitative handle on the question, it's useful to consider the distribution of the sample variance under our model. Then we could compute something like $Prob(\mbox{sample variance} &gt; 1.1)$ under the model. If this is very low, we have cause to question the assumptions of our model. In this case, the natural assumption to question is that the $X_i$'s have variance 1. </p> <p>We could then relax that assumption and fit a new model where the variance is unknown. Then if we are still skeptical of some of our assumptions, we could think of another statistic to test (say, 4th moment or autocorrelation) and do it the same as before. If these come out looking unlikely, maybe the assumption of normality or independence is wrong.</p> Answer 9: <p>You need to remember that we don't usually take just one sample, but often many repeated samples from the same population.</p> <p>Suppose we take a sample of size $2$ from some large population. The first time I might get $3$ and $7$. The second time I might get $9$ and $4$. The third time I might get $6$ and $3$. Each time I take a fresh sample, the two values are potentially different to what they were before. 
It is natural to think of these as being random variables because I know the properties of random variables.</p> <p>If I also calculate the mean, I will have a different value for the "sample statistic" each time: first $5$, then $6.5$ and then $4.5$</p> <p>Again, because the sample mean is changing each time, it is natural to treat the sample statistic as a random variable.</p> Answer 10: <p>In a statistical model, you assume that the sample $x_1\ldots x_n$ is drawn from some distribution $f_{X_1\ldots X_n}(x_1,\ldots ,x_n)$ so the actual numerical values of the sample are a realization of the random variables $X_1\ldots X_n.$ This is a <em>model</em>. We are assuming that the sample is an outcome of a random process.</p> <p>The reason this is useful, and also why considering sample statistics like the sample mean as random variables is useful, is that we can use the sample to fit and validate the model. So if we have a model in mind and we collect a sample, it is useful to know what the probability of that sample is under the model (this is called likelihood). If we draw a very improbable sample (in some sense), it will cast doubt on the validitity of the model. And if the model has free parameters, we can choose the parameters as ones that are consistent with our sample (e.g. maximum likelihood estimation).</p> <p>For instance, take a simple artificial example where we believe that our sample is independent normals with known variance $1$. But we don't know the mean. To fit the mean from the data, we look at the sample mean $\bar x$ and estimate $\hat \mu = \bar x.$ It turns out that this provides the best estimate for the mean under the model. However, we'd like to know two other things </p> <ol> <li><p>what is the error in our estimate. </p></li> <li><p>is our model any good, anyway?</p></li> </ol> <p>It turns out considering sample statistics as random variables is useful for both of these.</p> <p>We usually assume 2 when considering 1 (otherwise, what would be the point). So for 1, we want to know the error of the estimate. Obviously if we have like 5 samples, our error will be pretty big, and will decrease if we have more samples. We can make this quantitative by looking at the distribution of the sample mean (as as random variable). If the true mean of the normals is $\mu$, then the sample mean of $n$ samples will have distribution $\mathcal{N}(\mu, \frac{1}{n}),$ i.e. will be normal with variance $1/n$ (standard deviation $1/\sqrt{n}$), This tells you that the sample mean from the sample you draw will probably be within about $1/\sqrt{n}$ of the true mean. This gives an immediate estimate of the error and you can get more quantitative if you wish (i.e. confidence intervals).</p> <p>For 2, we could look at our sample and take the sample variance. If our sample is large, we'd expect this to come out to close to 1. If it doesn't, our model might be wrong. Say we from a sample of 100 we get a sample variance of 1.1. Does that mean our model is wrong? In order to get a quantitative handle on the question, it's useful to consider the distribution of the sample variance under our model. Then we could compute something like $Prob(\mbox{sample variance} &gt; 1.1)$ under the model. If this is very low, we have cause to question the assumptions of our model. In this case, the natural assumption to question is that the $X_i$'s have variance 1. </p> <p>We could then relax that assumption and fit a new model where the variance is unknown. 
Then if we are still skeptical of some of our assumptions, we could think of another statistic to test (say, 4th moment or autocorrelation) and do it the same as before. If these come out looking unlikely, maybe the assumption of normality or independence is wrong.</p> Answer 11: <p>You need to remember that we don't usually take just one sample, but often many repeated samples from the same population.</p> <p>Suppose we take a sample of size $2$ from some large population. The first time I might get $3$ and $7$. The second time I might get $9$ and $4$. The third time I might get $6$ and $3$. Each time I take a fresh sample, the two values are potentially different to what they were before. It is natural to think of these as being random variables because I know the properties of random variables.</p> <p>If I also calculate the mean, I will have a different value for the "sample statistic" each time: first $5$, then $6.5$ and then $4.5$</p> <p>Again, because the sample mean is changing each time, it is natural to treat the sample statistic as a random variable.</p> Answer 12: <p>In a statistical model, you assume that the sample $x_1\ldots x_n$ is drawn from some distribution $f_{X_1\ldots X_n}(x_1,\ldots ,x_n)$ so the actual numerical values of the sample are a realization of the random variables $X_1\ldots X_n.$ This is a <em>model</em>. We are assuming that the sample is an outcome of a random process.</p> <p>The reason this is useful, and also why considering sample statistics like the sample mean as random variables is useful, is that we can use the sample to fit and validate the model. So if we have a model in mind and we collect a sample, it is useful to know what the probability of that sample is under the model (this is called likelihood). If we draw a very improbable sample (in some sense), it will cast doubt on the validitity of the model. And if the model has free parameters, we can choose the parameters as ones that are consistent with our sample (e.g. maximum likelihood estimation).</p> <p>For instance, take a simple artificial example where we believe that our sample is independent normals with known variance $1$. But we don't know the mean. To fit the mean from the data, we look at the sample mean $\bar x$ and estimate $\hat \mu = \bar x.$ It turns out that this provides the best estimate for the mean under the model. However, we'd like to know two other things </p> <ol> <li><p>what is the error in our estimate. </p></li> <li><p>is our model any good, anyway?</p></li> </ol> <p>It turns out considering sample statistics as random variables is useful for both of these.</p> <p>We usually assume 2 when considering 1 (otherwise, what would be the point). So for 1, we want to know the error of the estimate. Obviously if we have like 5 samples, our error will be pretty big, and will decrease if we have more samples. We can make this quantitative by looking at the distribution of the sample mean (as as random variable). If the true mean of the normals is $\mu$, then the sample mean of $n$ samples will have distribution $\mathcal{N}(\mu, \frac{1}{n}),$ i.e. will be normal with variance $1/n$ (standard deviation $1/\sqrt{n}$), This tells you that the sample mean from the sample you draw will probably be within about $1/\sqrt{n}$ of the true mean. This gives an immediate estimate of the error and you can get more quantitative if you wish (i.e. confidence intervals).</p> <p>For 2, we could look at our sample and take the sample variance. 
If our sample is large, we'd expect this to come out to close to 1. If it doesn't, our model might be wrong. Say we from a sample of 100 we get a sample variance of 1.1. Does that mean our model is wrong? In order to get a quantitative handle on the question, it's useful to consider the distribution of the sample variance under our model. Then we could compute something like $Prob(\mbox{sample variance} &gt; 1.1)$ under the model. If this is very low, we have cause to question the assumptions of our model. In this case, the natural assumption to question is that the $X_i$'s have variance 1. </p> <p>We could then relax that assumption and fit a new model where the variance is unknown. Then if we are still skeptical of some of our assumptions, we could think of another statistic to test (say, 4th moment or autocorrelation) and do it the same as before. If these come out looking unlikely, maybe the assumption of normality or independence is wrong.</p> Answer 13: <p>You need to remember that we don't usually take just one sample, but often many repeated samples from the same population.</p> <p>Suppose we take a sample of size $2$ from some large population. The first time I might get $3$ and $7$. The second time I might get $9$ and $4$. The third time I might get $6$ and $3$. Each time I take a fresh sample, the two values are potentially different to what they were before. It is natural to think of these as being random variables because I know the properties of random variables.</p> <p>If I also calculate the mean, I will have a different value for the "sample statistic" each time: first $5$, then $6.5$ and then $4.5$</p> <p>Again, because the sample mean is changing each time, it is natural to treat the sample statistic as a random variable.</p> Answer 14: <p>In a statistical model, you assume that the sample $x_1\ldots x_n$ is drawn from some distribution $f_{X_1\ldots X_n}(x_1,\ldots ,x_n)$ so the actual numerical values of the sample are a realization of the random variables $X_1\ldots X_n.$ This is a <em>model</em>. We are assuming that the sample is an outcome of a random process.</p> <p>The reason this is useful, and also why considering sample statistics like the sample mean as random variables is useful, is that we can use the sample to fit and validate the model. So if we have a model in mind and we collect a sample, it is useful to know what the probability of that sample is under the model (this is called likelihood). If we draw a very improbable sample (in some sense), it will cast doubt on the validitity of the model. And if the model has free parameters, we can choose the parameters as ones that are consistent with our sample (e.g. maximum likelihood estimation).</p> <p>For instance, take a simple artificial example where we believe that our sample is independent normals with known variance $1$. But we don't know the mean. To fit the mean from the data, we look at the sample mean $\bar x$ and estimate $\hat \mu = \bar x.$ It turns out that this provides the best estimate for the mean under the model. However, we'd like to know two other things </p> <ol> <li><p>what is the error in our estimate. </p></li> <li><p>is our model any good, anyway?</p></li> </ol> <p>It turns out considering sample statistics as random variables is useful for both of these.</p> <p>We usually assume 2 when considering 1 (otherwise, what would be the point). So for 1, we want to know the error of the estimate. 
Obviously if we have like 5 samples, our error will be pretty big, and will decrease if we have more samples. We can make this quantitative by looking at the distribution of the sample mean (as as random variable). If the true mean of the normals is $\mu$, then the sample mean of $n$ samples will have distribution $\mathcal{N}(\mu, \frac{1}{n}),$ i.e. will be normal with variance $1/n$ (standard deviation $1/\sqrt{n}$), This tells you that the sample mean from the sample you draw will probably be within about $1/\sqrt{n}$ of the true mean. This gives an immediate estimate of the error and you can get more quantitative if you wish (i.e. confidence intervals).</p> <p>For 2, we could look at our sample and take the sample variance. If our sample is large, we'd expect this to come out to close to 1. If it doesn't, our model might be wrong. Say we from a sample of 100 we get a sample variance of 1.1. Does that mean our model is wrong? In order to get a quantitative handle on the question, it's useful to consider the distribution of the sample variance under our model. Then we could compute something like $Prob(\mbox{sample variance} &gt; 1.1)$ under the model. If this is very low, we have cause to question the assumptions of our model. In this case, the natural assumption to question is that the $X_i$'s have variance 1. </p> <p>We could then relax that assumption and fit a new model where the variance is unknown. Then if we are still skeptical of some of our assumptions, we could think of another statistic to test (say, 4th moment or autocorrelation) and do it the same as before. If these come out looking unlikely, maybe the assumption of normality or independence is wrong.</p> Answer 15: <p>You need to remember that we don't usually take just one sample, but often many repeated samples from the same population.</p> <p>Suppose we take a sample of size $2$ from some large population. The first time I might get $3$ and $7$. The second time I might get $9$ and $4$. The third time I might get $6$ and $3$. Each time I take a fresh sample, the two values are potentially different to what they were before. It is natural to think of these as being random variables because I know the properties of random variables.</p> <p>If I also calculate the mean, I will have a different value for the "sample statistic" each time: first $5$, then $6.5$ and then $4.5$</p> <p>Again, because the sample mean is changing each time, it is natural to treat the sample statistic as a random variable.</p> Answer 16: <p>In a statistical model, you assume that the sample $x_1\ldots x_n$ is drawn from some distribution $f_{X_1\ldots X_n}(x_1,\ldots ,x_n)$ so the actual numerical values of the sample are a realization of the random variables $X_1\ldots X_n.$ This is a <em>model</em>. We are assuming that the sample is an outcome of a random process.</p> <p>The reason this is useful, and also why considering sample statistics like the sample mean as random variables is useful, is that we can use the sample to fit and validate the model. So if we have a model in mind and we collect a sample, it is useful to know what the probability of that sample is under the model (this is called likelihood). If we draw a very improbable sample (in some sense), it will cast doubt on the validitity of the model. And if the model has free parameters, we can choose the parameters as ones that are consistent with our sample (e.g. 
maximum likelihood estimation).</p> <p>For instance, take a simple artificial example where we believe that our sample is independent normals with known variance $1$. But we don't know the mean. To fit the mean from the data, we look at the sample mean $\bar x$ and estimate $\hat \mu = \bar x.$ It turns out that this provides the best estimate for the mean under the model. However, we'd like to know two other things </p> <ol> <li><p>what is the error in our estimate. </p></li> <li><p>is our model any good, anyway?</p></li> </ol> <p>It turns out considering sample statistics as random variables is useful for both of these.</p> <p>We usually assume 2 when considering 1 (otherwise, what would be the point). So for 1, we want to know the error of the estimate. Obviously if we have like 5 samples, our error will be pretty big, and will decrease if we have more samples. We can make this quantitative by looking at the distribution of the sample mean (as as random variable). If the true mean of the normals is $\mu$, then the sample mean of $n$ samples will have distribution $\mathcal{N}(\mu, \frac{1}{n}),$ i.e. will be normal with variance $1/n$ (standard deviation $1/\sqrt{n}$), This tells you that the sample mean from the sample you draw will probably be within about $1/\sqrt{n}$ of the true mean. This gives an immediate estimate of the error and you can get more quantitative if you wish (i.e. confidence intervals).</p> <p>For 2, we could look at our sample and take the sample variance. If our sample is large, we'd expect this to come out to close to 1. If it doesn't, our model might be wrong. Say we from a sample of 100 we get a sample variance of 1.1. Does that mean our model is wrong? In order to get a quantitative handle on the question, it's useful to consider the distribution of the sample variance under our model. Then we could compute something like $Prob(\mbox{sample variance} &gt; 1.1)$ under the model. If this is very low, we have cause to question the assumptions of our model. In this case, the natural assumption to question is that the $X_i$'s have variance 1. </p> <p>We could then relax that assumption and fit a new model where the variance is unknown. Then if we are still skeptical of some of our assumptions, we could think of another statistic to test (say, 4th moment or autocorrelation) and do it the same as before. If these come out looking unlikely, maybe the assumption of normality or independence is wrong.</p> Answer 17: <p>You need to remember that we don't usually take just one sample, but often many repeated samples from the same population.</p> <p>Suppose we take a sample of size $2$ from some large population. The first time I might get $3$ and $7$. The second time I might get $9$ and $4$. The third time I might get $6$ and $3$. Each time I take a fresh sample, the two values are potentially different to what they were before. 
It is natural to think of these as being random variables because I know the properties of random variables.</p> <p>If I also calculate the mean, I will have a different value for the "sample statistic" each time: first $5$, then $6.5$ and then $4.5$</p> <p>Again, because the sample mean is changing each time, it is natural to treat the sample statistic as a random variable.</p> Answer 18: <p>In a statistical model, you assume that the sample $x_1\ldots x_n$ is drawn from some distribution $f_{X_1\ldots X_n}(x_1,\ldots ,x_n)$ so the actual numerical values of the sample are a realization of the random variables $X_1\ldots X_n.$ This is a <em>model</em>. We are assuming that the sample is an outcome of a random process.</p> <p>The reason this is useful, and also why considering sample statistics like the sample mean as random variables is useful, is that we can use the sample to fit and validate the model. So if we have a model in mind and we collect a sample, it is useful to know what the probability of that sample is under the model (this is called likelihood). If we draw a very improbable sample (in some sense), it will cast doubt on the validitity of the model. And if the model has free parameters, we can choose the parameters as ones that are consistent with our sample (e.g. maximum likelihood estimation).</p> <p>For instance, take a simple artificial example where we believe that our sample is independent normals with known variance $1$. But we don't know the mean. To fit the mean from the data, we look at the sample mean $\bar x$ and estimate $\hat \mu = \bar x.$ It turns out that this provides the best estimate for the mean under the model. However, we'd like to know two other things </p> <ol> <li><p>what is the error in our estimate. </p></li> <li><p>is our model any good, anyway?</p></li> </ol> <p>It turns out considering sample statistics as random variables is useful for both of these.</p> <p>We usually assume 2 when considering 1 (otherwise, what would be the point). So for 1, we want to know the error of the estimate. Obviously if we have like 5 samples, our error will be pretty big, and will decrease if we have more samples. We can make this quantitative by looking at the distribution of the sample mean (as as random variable). If the true mean of the normals is $\mu$, then the sample mean of $n$ samples will have distribution $\mathcal{N}(\mu, \frac{1}{n}),$ i.e. will be normal with variance $1/n$ (standard deviation $1/\sqrt{n}$), This tells you that the sample mean from the sample you draw will probably be within about $1/\sqrt{n}$ of the true mean. This gives an immediate estimate of the error and you can get more quantitative if you wish (i.e. confidence intervals).</p> <p>For 2, we could look at our sample and take the sample variance. If our sample is large, we'd expect this to come out to close to 1. If it doesn't, our model might be wrong. Say we from a sample of 100 we get a sample variance of 1.1. Does that mean our model is wrong? In order to get a quantitative handle on the question, it's useful to consider the distribution of the sample variance under our model. Then we could compute something like $Prob(\mbox{sample variance} &gt; 1.1)$ under the model. If this is very low, we have cause to question the assumptions of our model. In this case, the natural assumption to question is that the $X_i$'s have variance 1. </p> <p>We could then relax that assumption and fit a new model where the variance is unknown. 
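A minimal simulation sketch of the two numbered points above, viewing the sample mean and sample variance as random variables. The true mean of $2$, the sample size $n = 100$, and the $10{,}000$ Monte Carlo repetitions are illustrative assumptions, not values from the answers.
<pre><code>import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 2.0, 100, 10_000

# Point 1: the sample mean, viewed as a random variable, has
# standard deviation about 1/sqrt(n) under the N(mu, 1) model.
means = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)
print(means.std())  # close to 1/sqrt(100) = 0.1

# Point 2: how surprising is a sample variance of 1.1 under the model?
variances = rng.normal(mu, 1.0, size=(reps, n)).var(axis=1, ddof=1)
print((variances > 1.1).mean())  # Monte Carlo estimate of Prob(sample variance > 1.1)
</code></pre>
If the printed probability were tiny, the observed variance of $1.1$ would cast doubt on the known-variance assumption, exactly as the answer describes.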
How to prove that a function is differentiable everywhere?
2,084,088
6
https://math.stackexchange.com/questions/2084088/how-to-prove-that-a-function-is-differentiable-everywhere
<p>The function I came up with is:</p> <p>$f(x)= -5x$</p> <p>and I think this function can be differentiated everywhere, because its domain is $\mathbb R$, right?</p> <p>$f'(x) = -5$, which is constant; but my question is: how can I use a theorem to prove my answer?</p>
[ { "answer_id": 2084092, "score": 22, "text": "<p>To show that $f$ is differentiable at all $x \\in \\Bbb R$, we must show that $f'(x)$ exists at all $x \\in \\Bbb R$.</p>\n\n<p>Recall that $f$ is differentiable at $x$ if $\\lim\\limits_{h \\to 0} \\frac{f(x+h)-f(x)}{h}$ exists.</p>\n\n<p>So for $f(x)=-5x$, we examine\n$$\\lim\\limits_{h \\to 0} \\frac{-5(x+h)- (-5x)}{h} = \\lim\\limits_{h \\to 0} \\frac{-5h}{h} = \\lim\\limits_{h \\to 0}{-5} = -5$$\nAnd so we see that $f$ is differentiable at all $x \\in \\Bbb R$ with derivative $f'(x)= -5$.</p>\n\n<hr>\n\n<p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$ and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. Then let $g(x) = x, \\; h(x) = -5$, noting that both are differentiable with derivates $1$ and $0$ respectively, leading to the same result.</p>\n" }, { "answer_id": 4183829, "score": 1, "text": "<p>Since we need to prove that the function is differentiable everywhere, in other words, we are proving that the derivative of the function is defined everywhere.</p>\n<p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. This constant is independent of the value of <em><strong>x</strong></em>.</p>\n<p>In other words, the derivative of the function remains the same, regardless of what real value you assign to the variable <em><strong>x</strong></em>. This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p>\n<p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case), for the function to be differentiable. The only condition is: <strong>the derivative should be defined</strong> in the interval where the function is differentiable.</p>\n" }, { "answer_id": 2084092, "score": 22, "text": "<p>To show that $f$ is differentiable at all $x \\in \\Bbb R$, we must show that $f'(x)$ exists at all $x \\in \\Bbb R$.</p>\n\n<p>Recall that $f$ is differentiable at $x$ if $\\lim\\limits_{h \\to 0} \\frac{f(x+h)-f(x)}{h}$ exists.</p>\n\n<p>So for $f(x)=-5x$, we examine\n$$\\lim\\limits_{h \\to 0} \\frac{-5(x+h)- (-5x)}{h} = \\lim\\limits_{h \\to 0} \\frac{-5h}{h} = \\lim\\limits_{h \\to 0}{-5} = -5$$\nAnd so we see that $f$ is differentiable at all $x \\in \\Bbb R$ with derivative $f'(x)= -5$.</p>\n\n<hr>\n\n<p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$ and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. Then let $g(x) = x, \\; h(x) = -5$, noting that both are differentiable with derivates $1$ and $0$ respectively, leading to the same result.</p>\n" }, { "answer_id": 4183829, "score": 1, "text": "<p>Since we need to prove that the function is differentiable everywhere, in other words, we are proving that the derivative of the function is defined everywhere.</p>\n<p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. This constant is independent of the value of <em><strong>x</strong></em>.</p>\n<p>In other words, the derivative of the function remains the same, regardless of what real value you assign to the variable <em><strong>x</strong></em>. This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p>\n<p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case), for the function to be differentiable. 
The only condition is: <strong>the derivative should be defined</strong> in the interval where the function is differentiable.</p>\n" }, { "answer_id": 2084092, "score": 22, "text": "<p>To show that $f$ is differentiable at all $x \\in \\Bbb R$, we must show that $f'(x)$ exists at all $x \\in \\Bbb R$.</p>\n\n<p>Recall that $f$ is differentiable at $x$ if $\\lim\\limits_{h \\to 0} \\frac{f(x+h)-f(x)}{h}$ exists.</p>\n\n<p>So for $f(x)=-5x$, we examine\n$$\\lim\\limits_{h \\to 0} \\frac{-5(x+h)- (-5x)}{h} = \\lim\\limits_{h \\to 0} \\frac{-5h}{h} = \\lim\\limits_{h \\to 0}{-5} = -5$$\nAnd so we see that $f$ is differentiable at all $x \\in \\Bbb R$ with derivative $f'(x)= -5$.</p>\n\n<hr>\n\n<p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$ and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. Then let $g(x) = x, \\; h(x) = -5$, noting that both are differentiable with derivates $1$ and $0$ respectively, leading to the same result.</p>\n" }, { "answer_id": 4183829, "score": 1, "text": "<p>Since we need to prove that the function is differentiable everywhere, in other words, we are proving that the derivative of the function is defined everywhere.</p>\n<p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. This constant is independent of the value of <em><strong>x</strong></em>.</p>\n<p>In other words, the derivative of the function remains the same, regardless of what real value you assign to the variable <em><strong>x</strong></em>. This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p>\n<p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case), for the function to be differentiable. The only condition is: <strong>the derivative should be defined</strong> in the interval where the function is differentiable.</p>\n" }, { "answer_id": 2084092, "score": 22, "text": "<p>To show that $f$ is differentiable at all $x \\in \\Bbb R$, we must show that $f'(x)$ exists at all $x \\in \\Bbb R$.</p>\n\n<p>Recall that $f$ is differentiable at $x$ if $\\lim\\limits_{h \\to 0} \\frac{f(x+h)-f(x)}{h}$ exists.</p>\n\n<p>So for $f(x)=-5x$, we examine\n$$\\lim\\limits_{h \\to 0} \\frac{-5(x+h)- (-5x)}{h} = \\lim\\limits_{h \\to 0} \\frac{-5h}{h} = \\lim\\limits_{h \\to 0}{-5} = -5$$\nAnd so we see that $f$ is differentiable at all $x \\in \\Bbb R$ with derivative $f'(x)= -5$.</p>\n\n<hr>\n\n<p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$ and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. Then let $g(x) = x, \\; h(x) = -5$, noting that both are differentiable with derivates $1$ and $0$ respectively, leading to the same result.</p>\n" }, { "answer_id": 4183829, "score": 1, "text": "<p>Since we need to prove that the function is differentiable everywhere, in other words, we are proving that the derivative of the function is defined everywhere.</p>\n<p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. This constant is independent of the value of <em><strong>x</strong></em>.</p>\n<p>In other words, the derivative of the function remains the same, regardless of what real value you assign to the variable <em><strong>x</strong></em>. 
This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p>\n<p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case), for the function to be differentiable. The only condition is: <strong>the derivative should be defined</strong> in the interval where the function is differentiable.</p>\n" }, { "answer_id": 2084092, "score": 22, "text": "<p>To show that $f$ is differentiable at all $x \\in \\Bbb R$, we must show that $f'(x)$ exists at all $x \\in \\Bbb R$.</p>\n\n<p>Recall that $f$ is differentiable at $x$ if $\\lim\\limits_{h \\to 0} \\frac{f(x+h)-f(x)}{h}$ exists.</p>\n\n<p>So for $f(x)=-5x$, we examine\n$$\\lim\\limits_{h \\to 0} \\frac{-5(x+h)- (-5x)}{h} = \\lim\\limits_{h \\to 0} \\frac{-5h}{h} = \\lim\\limits_{h \\to 0}{-5} = -5$$\nAnd so we see that $f$ is differentiable at all $x \\in \\Bbb R$ with derivative $f'(x)= -5$.</p>\n\n<hr>\n\n<p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$ and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. Then let $g(x) = x, \\; h(x) = -5$, noting that both are differentiable with derivates $1$ and $0$ respectively, leading to the same result.</p>\n" }, { "answer_id": 4183829, "score": 1, "text": "<p>Since we need to prove that the function is differentiable everywhere, in other words, we are proving that the derivative of the function is defined everywhere.</p>\n<p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. This constant is independent of the value of <em><strong>x</strong></em>.</p>\n<p>In other words, the derivative of the function remains the same, regardless of what real value you assign to the variable <em><strong>x</strong></em>. This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p>\n<p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case), for the function to be differentiable. The only condition is: <strong>the derivative should be defined</strong> in the interval where the function is differentiable.</p>\n" }, { "answer_id": 2084092, "score": 22, "text": "<p>To show that $f$ is differentiable at all $x \\in \\Bbb R$, we must show that $f'(x)$ exists at all $x \\in \\Bbb R$.</p>\n\n<p>Recall that $f$ is differentiable at $x$ if $\\lim\\limits_{h \\to 0} \\frac{f(x+h)-f(x)}{h}$ exists.</p>\n\n<p>So for $f(x)=-5x$, we examine\n$$\\lim\\limits_{h \\to 0} \\frac{-5(x+h)- (-5x)}{h} = \\lim\\limits_{h \\to 0} \\frac{-5h}{h} = \\lim\\limits_{h \\to 0}{-5} = -5$$\nAnd so we see that $f$ is differentiable at all $x \\in \\Bbb R$ with derivative $f'(x)= -5$.</p>\n\n<hr>\n\n<p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$ and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. Then let $g(x) = x, \\; h(x) = -5$, noting that both are differentiable with derivates $1$ and $0$ respectively, leading to the same result.</p>\n" }, { "answer_id": 4183829, "score": 1, "text": "<p>Since we need to prove that the function is differentiable everywhere, in other words, we are proving that the derivative of the function is defined everywhere.</p>\n<p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. 
This constant is independent of the value of <em><strong>x</strong></em>.</p>\n<p>In other words, the derivative of the function remains the same, regardless of what real value you assign to the variable <em><strong>x</strong></em>. This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p>\n<p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case), for the function to be differentiable. The only condition is: <strong>the derivative should be defined</strong> in the interval where the function is differentiable.</p>\n" }, { "answer_id": 2084092, "score": 22, "text": "<p>To show that $f$ is differentiable at all $x \\in \\Bbb R$, we must show that $f'(x)$ exists at all $x \\in \\Bbb R$.</p>\n\n<p>Recall that $f$ is differentiable at $x$ if $\\lim\\limits_{h \\to 0} \\frac{f(x+h)-f(x)}{h}$ exists.</p>\n\n<p>So for $f(x)=-5x$, we examine\n$$\\lim\\limits_{h \\to 0} \\frac{-5(x+h)- (-5x)}{h} = \\lim\\limits_{h \\to 0} \\frac{-5h}{h} = \\lim\\limits_{h \\to 0}{-5} = -5$$\nAnd so we see that $f$ is differentiable at all $x \\in \\Bbb R$ with derivative $f'(x)= -5$.</p>\n\n<hr>\n\n<p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$ and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. Then let $g(x) = x, \\; h(x) = -5$, noting that both are differentiable with derivates $1$ and $0$ respectively, leading to the same result.</p>\n" }, { "answer_id": 4183829, "score": 1, "text": "<p>Since we need to prove that the function is differentiable everywhere, in other words, we are proving that the derivative of the function is defined everywhere.</p>\n<p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. This constant is independent of the value of <em><strong>x</strong></em>.</p>\n<p>In other words, the derivative of the function remains the same, regardless of what real value you assign to the variable <em><strong>x</strong></em>. This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p>\n<p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case), for the function to be differentiable. The only condition is: <strong>the derivative should be defined</strong> in the interval where the function is differentiable.</p>\n" }, { "answer_id": 2084092, "score": 22, "text": "<p>To show that $f$ is differentiable at all $x \\in \\Bbb R$, we must show that $f'(x)$ exists at all $x \\in \\Bbb R$.</p>\n\n<p>Recall that $f$ is differentiable at $x$ if $\\lim\\limits_{h \\to 0} \\frac{f(x+h)-f(x)}{h}$ exists.</p>\n\n<p>So for $f(x)=-5x$, we examine\n$$\\lim\\limits_{h \\to 0} \\frac{-5(x+h)- (-5x)}{h} = \\lim\\limits_{h \\to 0} \\frac{-5h}{h} = \\lim\\limits_{h \\to 0}{-5} = -5$$\nAnd so we see that $f$ is differentiable at all $x \\in \\Bbb R$ with derivative $f'(x)= -5$.</p>\n\n<hr>\n\n<p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$ and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. 
Then let $g(x) = x, \\; h(x) = -5$, noting that both are differentiable with derivates $1$ and $0$ respectively, leading to the same result.</p>\n" }, { "answer_id": 4183829, "score": 1, "text": "<p>Since we need to prove that the function is differentiable everywhere, in other words, we are proving that the derivative of the function is defined everywhere.</p>\n<p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. This constant is independent of the value of <em><strong>x</strong></em>.</p>\n<p>In other words, the derivative of the function remains the same, regardless of what real value you assign to the variable <em><strong>x</strong></em>. This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p>\n<p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case), for the function to be differentiable. The only condition is: <strong>the derivative should be defined</strong> in the interval where the function is differentiable.</p>\n" }, { "answer_id": 2084092, "score": 22, "text": "<p>To show that $f$ is differentiable at all $x \\in \\Bbb R$, we must show that $f'(x)$ exists at all $x \\in \\Bbb R$.</p>\n\n<p>Recall that $f$ is differentiable at $x$ if $\\lim\\limits_{h \\to 0} \\frac{f(x+h)-f(x)}{h}$ exists.</p>\n\n<p>So for $f(x)=-5x$, we examine\n$$\\lim\\limits_{h \\to 0} \\frac{-5(x+h)- (-5x)}{h} = \\lim\\limits_{h \\to 0} \\frac{-5h}{h} = \\lim\\limits_{h \\to 0}{-5} = -5$$\nAnd so we see that $f$ is differentiable at all $x \\in \\Bbb R$ with derivative $f'(x)= -5$.</p>\n\n<hr>\n\n<p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$ and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. Then let $g(x) = x, \\; h(x) = -5$, noting that both are differentiable with derivates $1$ and $0$ respectively, leading to the same result.</p>\n" }, { "answer_id": 4183829, "score": 1, "text": "<p>Since we need to prove that the function is differentiable everywhere, in other words, we are proving that the derivative of the function is defined everywhere.</p>\n<p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. This constant is independent of the value of <em><strong>x</strong></em>.</p>\n<p>In other words, the derivative of the function remains the same, regardless of what real value you assign to the variable <em><strong>x</strong></em>. This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p>\n<p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case), for the function to be differentiable. 
The only condition is: <strong>the derivative should be defined</strong> in the interval where the function is differentiable.</p>\n" }, { "answer_id": 2084092, "score": 22, "text": "<p>To show that $f$ is differentiable at all $x \\in \\Bbb R$, we must show that $f'(x)$ exists at all $x \\in \\Bbb R$.</p>\n\n<p>Recall that $f$ is differentiable at $x$ if $\\lim\\limits_{h \\to 0} \\frac{f(x+h)-f(x)}{h}$ exists.</p>\n\n<p>So for $f(x)=-5x$, we examine\n$$\\lim\\limits_{h \\to 0} \\frac{-5(x+h)- (-5x)}{h} = \\lim\\limits_{h \\to 0} \\frac{-5h}{h} = \\lim\\limits_{h \\to 0}{-5} = -5$$\nAnd so we see that $f$ is differentiable at all $x \\in \\Bbb R$ with derivative $f'(x)= -5$.</p>\n\n<hr>\n\n<p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$ and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. Then let $g(x) = x, \\; h(x) = -5$, noting that both are differentiable with derivates $1$ and $0$ respectively, leading to the same result.</p>\n" }, { "answer_id": 4183829, "score": 1, "text": "<p>Since we need to prove that the function is differentiable everywhere, in other words, we are proving that the derivative of the function is defined everywhere.</p>\n<p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. This constant is independent of the value of <em><strong>x</strong></em>.</p>\n<p>In other words, the derivative of the function remains the same, regardless of what real value you assign to the variable <em><strong>x</strong></em>. This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p>\n<p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case), for the function to be differentiable. The only condition is: <strong>the derivative should be defined</strong> in the interval where the function is differentiable.</p>\n" }, { "answer_id": 2084092, "score": 22, "text": "<p>To show that $f$ is differentiable at all $x \\in \\Bbb R$, we must show that $f'(x)$ exists at all $x \\in \\Bbb R$.</p>\n\n<p>Recall that $f$ is differentiable at $x$ if $\\lim\\limits_{h \\to 0} \\frac{f(x+h)-f(x)}{h}$ exists.</p>\n\n<p>So for $f(x)=-5x$, we examine\n$$\\lim\\limits_{h \\to 0} \\frac{-5(x+h)- (-5x)}{h} = \\lim\\limits_{h \\to 0} \\frac{-5h}{h} = \\lim\\limits_{h \\to 0}{-5} = -5$$\nAnd so we see that $f$ is differentiable at all $x \\in \\Bbb R$ with derivative $f'(x)= -5$.</p>\n\n<hr>\n\n<p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$ and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. Then let $g(x) = x, \\; h(x) = -5$, noting that both are differentiable with derivates $1$ and $0$ respectively, leading to the same result.</p>\n" }, { "answer_id": 4183829, "score": 1, "text": "<p>Since we need to prove that the function is differentiable everywhere, in other words, we are proving that the derivative of the function is defined everywhere.</p>\n<p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. This constant is independent of the value of <em><strong>x</strong></em>.</p>\n<p>In other words, the derivative of the function remains the same, regardless of what real value you assign to the variable <em><strong>x</strong></em>. 
This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p>\n<p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case), for the function to be differentiable. The only condition is: <strong>the derivative should be defined</strong> in the interval where the function is differentiable.</p>\n" }, { "answer_id": 2084092, "score": 22, "text": "<p>To show that $f$ is differentiable at all $x \\in \\Bbb R$, we must show that $f'(x)$ exists at all $x \\in \\Bbb R$.</p>\n\n<p>Recall that $f$ is differentiable at $x$ if $\\lim\\limits_{h \\to 0} \\frac{f(x+h)-f(x)}{h}$ exists.</p>\n\n<p>So for $f(x)=-5x$, we examine\n$$\\lim\\limits_{h \\to 0} \\frac{-5(x+h)- (-5x)}{h} = \\lim\\limits_{h \\to 0} \\frac{-5h}{h} = \\lim\\limits_{h \\to 0}{-5} = -5$$\nAnd so we see that $f$ is differentiable at all $x \\in \\Bbb R$ with derivative $f'(x)= -5$.</p>\n\n<hr>\n\n<p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$ and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. Then let $g(x) = x, \\; h(x) = -5$, noting that both are differentiable with derivates $1$ and $0$ respectively, leading to the same result.</p>\n" }, { "answer_id": 4183829, "score": 1, "text": "<p>Since we need to prove that the function is differentiable everywhere, in other words, we are proving that the derivative of the function is defined everywhere.</p>\n<p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. This constant is independent of the value of <em><strong>x</strong></em>.</p>\n<p>In other words, the derivative of the function remains the same, regardless of what real value you assign to the variable <em><strong>x</strong></em>. This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p>\n<p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case), for the function to be differentiable. The only condition is: <strong>the derivative should be defined</strong> in the interval where the function is differentiable.</p>\n" } ]
Question: How to prove that a function is differentiable everywhere? Answer 1: <p>To show that $f$ is differentiable at all $x \in \Bbb R$, we must show that $f'(x)$ exists at all $x \in \Bbb R$.</p> <p>Recall that $f$ is differentiable at $x$ if $\lim\limits_{h \to 0} \frac{f(x+h)-f(x)}{h}$ exists.</p> <p>So for $f(x)=-5x$, we examine $$\lim\limits_{h \to 0} \frac{-5(x+h)- (-5x)}{h} = \lim\limits_{h \to 0} \frac{-5h}{h} = \lim\limits_{h \to 0}{-5} = -5$$ And so we see that $f$ is differentiable at all $x \in \Bbb R$ with derivative $f'(x)= -5$.</p> <hr> <p>We could also say that if $g(x)$ and $h(x)$ are differentiable, then so too is $f(x) = g(x) h(x)$, and that $f'(x) = g'(x) h(x) + g(x) h'(x)$. Then let $g(x) = x, \; h(x) = -5$, noting that both are differentiable with derivatives $1$ and $0$ respectively, leading to the same result.</p> Answer 2: <p>Since we need to prove that the function is differentiable everywhere, we are in other words proving that the derivative of the function is defined everywhere.</p> <p>In the given function, the derivative, as you have said, is a constant <strong>(-5)</strong>. This constant is independent of the value of <em><strong>x</strong></em>.</p> <p>In other words, the derivative of the function remains the same regardless of what real value you assign to the variable <em><strong>x</strong></em>. This implies that the derivative certainly exists for all real values of <em><strong>x</strong></em>.</p> <p><strong>Note:</strong> The derivative of the function need not be constant (as it was in this case) for the function to be differentiable. The only condition is that <strong>the derivative should be defined</strong> on the interval where the function is differentiable.</p>
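As a quick symbolic check of the limit computation in the answers above, one could evaluate the difference quotient with SymPy; this is purely illustrative, and the variable names are assumptions of convenience rather than part of the original answers.
<pre><code>import sympy as sp

x, h = sp.symbols('x h')
f = -5 * x

# Difference quotient (f(x + h) - f(x)) / h from the limit definition.
difference_quotient = (f.subs(x, x + h) - f) / h
print(sp.limit(difference_quotient, h, 0))  # prints -5, for every real x
</code></pre>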
Differentiating a function of a variable with respect to the variable's derivative
2,084,681
6
https://math.stackexchange.com/questions/2084681/differentiating-a-function-of-a-variable-with-respect-to-the-variables-derivati
<p>Suppose $x:\mathbb{R}\to \mathbb{R}$ is parameterised by $\lambda$. What does it mean to take a derivative of a function $f(x)$ with respect to $\dot{x} = \frac{dx}{d\lambda}$? I.e., what does $\frac{df(x)}{d\dot{x}}$ mean? How do we compute it?</p> <p>Is $\frac{d}{d\dot{x}}=\frac{d}{d\frac{dx}{d\lambda}}=^{??} \frac{d\lambda}{dx}=^{??} 0$ ???</p> <p>For example, how would one compute $\frac{d}{d\dot{x}} e^x$?</p> <p>(This question has arisen from an undergraduate relativity course, in trying to compute the Euler-Lagrange equations, given a certain metric).</p>
[ { "answer_id": 2085001, "score": 11, "text": "<p>I think many answers are missing the central point of confusion here. Your instincts are correct: in general the notation $\\frac{\\partial L}{\\partial\\dot{x}}$ makes absolutely no mathematical sense whatsoever. A mathematician would probably not use this notation given a choice.</p>\n\n<p>But the Lagrangian is special because it is given as a function of $x$ and $\\dot{x}$. So the notation you're seeing can be justified in this context, using the following protocol.</p>\n\n<p>First, define</p>\n\n<p>$$v(\\lambda) = \\dot{x}(\\lambda)=\\frac{dx}{d\\lambda}$$</p>\n\n<p>Then, think of the symbol $\\dot{x}$ as the same thing as $v$; in other words, anywhere you see $\\dot{x}$, replace it in your head or on paper with $v$.</p>\n\n<p>Now, the Lagrangian is usually given as a function of two variables $L=L(x,\\dot{x})$, which you can rewrite</p>\n\n<p>$$L(x,v)$$</p>\n\n<p>Thus $L$ is just a function of two independent variables $x$ and $v$, ie. it's a function $\\mathbb{R}^2\\to\\mathbb{R}$. So rewriting $\\frac{\\partial L}{\\partial \\dot{x}}$ as</p>\n\n<p>$$\\frac{\\partial L}{\\partial v}$$</p>\n\n<p>gives us something sensible.</p>\n\n<p>To sum up, the only reason the notation $\\frac{\\partial f}{\\partial \\dot{x}}$ works is because $f$ is given as a function of $x$ and $\\dot{x}$. This notation would not make sense for arbitrary smooth functions.</p>\n" }, { "answer_id": 2084687, "score": 3, "text": "<p>Be careful with your second line it's best practice to keep it like $\\frac{d}{d\\dot{x}}$. It is really as simple as it looks. \nGiven some Langrangian say $L(x,\\dot{x})=\\frac{1}{2}m\\dot{x}^2 - V(x)$. Then $\\frac{\\partial L}{\\partial \\dot{x}} = m\\dot{x}$. </p>\n" }, { "answer_id": 2084744, "score": 1, "text": "<p>In physics and more generally in functional analysis, when varying an action with respect to the derivative of a variable, say $\\dot x$ in your case, we treat $x$ and $\\dot x$ as entirely separate. That is,</p>\n\n<p>$$\\frac{d}{dx}f(\\dot x) = 0.$$</p>\n\n<p>Likewise, $\\frac{d}{d \\dot x}f(x)= 0$. This applies to more complicated cases, like for example,</p>\n\n<p>$$\\frac{\\partial}{\\partial \\phi} f(\\partial_\\mu\\phi) = 0$$</p>\n\n<p>in field theory. It is only after we have derived the Euler-Lagrange equations that we identify, $$\\dot x = \\frac{d}{dt}x$$</p>\n\n<p>in order to solve the equations of motion, or say, quantise the system.</p>\n" }, { "answer_id": 2084783, "score": 0, "text": "<p>This answer focus on the important issue of the question, which is explained in its body by OP.</p>\n\n<p>Let me present an alternative way of thinking, which purges the \"variable\" mess, to clear up the confusion arising. Let's consider the Lagrangian $L=Kin-Pot.$</p>\n\n<p>The Lagrangian, per se, is a function $L: \\mathbb{R}^n \\times \\mathbb{R}^n \\to \\mathbb{R}$ (let's assume it doesn't depend on time, which would simply mean $L: \\mathbb{R} \\times \\mathbb{R}^n \\times \\mathbb{R}^n \\to \\mathbb{R}$). No derivatives involved. In this case, it is\n$$L(p,q)=\\frac{1}{2}mq^2-V(p).$$\nNow, let $\\widetilde{\\gamma}$ be the function $t \\mapsto (\\gamma(t),\\dot{\\gamma}(t))$.</p>\n\n<p>The Euler-Lagrange equations then state that\n$$(\\nabla_1L) \\circ \\widetilde{\\gamma} -\\big((\\nabla_2L)\\circ \\widetilde{\\gamma}\\big)'=0,$$\nwhere $\\nabla_1L$ means the first $n$ coordinates of the gradient and $\\nabla_2L$ means the last $n$ coordinates.</p>\n\n<p>Let's compute this in our case. 
It is clear that $\\nabla_1L(p,q)=-\\nabla V(p)$, and $\\nabla_2 L(p,q)=mq$. Then the Euler-Lagrange equations say that\n $$(\\nabla_1L) \\circ \\widetilde{\\gamma} -\\big((\\nabla_2L)\\circ \\widetilde{\\gamma}\\big)'=0$$\n$$\\implies (\\nabla_1L) (\\gamma(t), \\dot{\\gamma}(t)) -\\big((\\nabla_2L)(\\gamma(t),\\dot{\\gamma}(t))\\big)'=0$$\n$$\\implies -\\nabla V(\\gamma(t)) -(m \\dot{\\gamma}(t))'=0$$\n$$\\implies -\\nabla V(\\gamma(t))=m\\ddot{\\gamma\n}(t),$$\nwhich is what we would expect.</p>\n\n<p>\"Differentiating with respect to $\\dot{x}$\" is just a quicker way to think of the above process. However, it is naturally confusing, since the details of the process go down under the rug.</p>\n" }, { "answer_id": 2085001, "score": 11, "text": "<p>I think many answers are missing the central point of confusion here. Your instincts are correct: in general the notation $\\frac{\\partial L}{\\partial\\dot{x}}$ makes absolutely no mathematical sense whatsoever. A mathematician would probably not use this notation given a choice.</p>\n\n<p>But the Lagrangian is special because it is given as a function of $x$ and $\\dot{x}$. So the notation you're seeing can be justified in this context, using the following protocol.</p>\n\n<p>First, define</p>\n\n<p>$$v(\\lambda) = \\dot{x}(\\lambda)=\\frac{dx}{d\\lambda}$$</p>\n\n<p>Then, think of the symbol $\\dot{x}$ as the same thing as $v$; in other words, anywhere you see $\\dot{x}$, replace it in your head or on paper with $v$.</p>\n\n<p>Now, the Lagrangian is usually given as a function of two variables $L=L(x,\\dot{x})$, which you can rewrite</p>\n\n<p>$$L(x,v)$$</p>\n\n<p>Thus $L$ is just a function of two independent variables $x$ and $v$, ie. it's a function $\\mathbb{R}^2\\to\\mathbb{R}$. So rewriting $\\frac{\\partial L}{\\partial \\dot{x}}$ as</p>\n\n<p>$$\\frac{\\partial L}{\\partial v}$$</p>\n\n<p>gives us something sensible.</p>\n\n<p>To sum up, the only reason the notation $\\frac{\\partial f}{\\partial \\dot{x}}$ works is because $f$ is given as a function of $x$ and $\\dot{x}$. This notation would not make sense for arbitrary smooth functions.</p>\n" }, { "answer_id": 2084687, "score": 3, "text": "<p>Be careful with your second line it's best practice to keep it like $\\frac{d}{d\\dot{x}}$. It is really as simple as it looks. \nGiven some Langrangian say $L(x,\\dot{x})=\\frac{1}{2}m\\dot{x}^2 - V(x)$. Then $\\frac{\\partial L}{\\partial \\dot{x}} = m\\dot{x}$. </p>\n" }, { "answer_id": 2084744, "score": 1, "text": "<p>In physics and more generally in functional analysis, when varying an action with respect to the derivative of a variable, say $\\dot x$ in your case, we treat $x$ and $\\dot x$ as entirely separate. That is,</p>\n\n<p>$$\\frac{d}{dx}f(\\dot x) = 0.$$</p>\n\n<p>Likewise, $\\frac{d}{d \\dot x}f(x)= 0$. This applies to more complicated cases, like for example,</p>\n\n<p>$$\\frac{\\partial}{\\partial \\phi} f(\\partial_\\mu\\phi) = 0$$</p>\n\n<p>in field theory. It is only after we have derived the Euler-Lagrange equations that we identify, $$\\dot x = \\frac{d}{dt}x$$</p>\n\n<p>in order to solve the equations of motion, or say, quantise the system.</p>\n" }, { "answer_id": 2084783, "score": 0, "text": "<p>This answer focus on the important issue of the question, which is explained in its body by OP.</p>\n\n<p>Let me present an alternative way of thinking, which purges the \"variable\" mess, to clear up the confusion arising. 
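To make the first answer's protocol concrete, here is a minimal sketch in Python with sympy (an assumption of this note: the names x, v, m and the undetermined potential V are illustrative choices, not taken from the answers above). Renaming $\dot{x}$ to an independent symbol v makes every partial derivative in the answers a plain, well-defined operation:

from sympy import symbols, exp, diff, Function

# Protocol from the first answer: rename xdot to v and treat x, v as independent symbols.
x, v, m = symbols('x v m')
V = Function('V')  # unspecified potential; the name V is an illustrative assumption

L = m * v**2 / 2 - V(x)  # L(x, v) = (1/2)*m*v**2 - V(x)

print(diff(L, v))        # m*v, i.e. dL/d(xdot) = m*xdot, matching the second answer
print(diff(L, x))        # -Derivative(V(x), x), i.e. -V'(x)
print(diff(exp(x), v))   # 0: e**x carries no v-dependence

The last line is the question's own example: once $\dot{x}$ is read as the independent symbol v, $\frac{d}{d\dot{x}} e^x$ is simply $0$.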
Question: Differentiating a function of a variable with respect to the variable's derivative
Answer 1: <p>I think many answers are missing the central point of confusion here. Your instincts are correct: in general the notation $\frac{\partial L}{\partial\dot{x}}$ makes absolutely no mathematical sense whatsoever. A mathematician would probably not use this notation given a choice.</p> <p>But the Lagrangian is special because it is given as a function of $x$ and $\dot{x}$. So the notation you're seeing can be justified in this context, using the following protocol.</p> <p>First, define</p> <p>$$v(\lambda) = \dot{x}(\lambda)=\frac{dx}{d\lambda}$$</p> <p>Then, think of the symbol $\dot{x}$ as the same thing as $v$; in other words, anywhere you see $\dot{x}$, replace it in your head or on paper with $v$.</p> <p>Now, the Lagrangian is usually given as a function of two variables, $L=L(x,\dot{x})$, which you can rewrite as</p> <p>$$L(x,v)$$</p> <p>Thus $L$ is just a function of two independent variables $x$ and $v$, i.e. it's a function $\mathbb{R}^2\to\mathbb{R}$. So rewriting $\frac{\partial L}{\partial \dot{x}}$ as</p> <p>$$\frac{\partial L}{\partial v}$$</p> <p>gives us something sensible.</p> <p>To sum up, the only reason the notation $\frac{\partial f}{\partial \dot{x}}$ works is that $f$ is given as a function of $x$ and $\dot{x}$. This notation would not make sense for arbitrary smooth functions.</p>
Answer 2: <p>Be careful with your second line; it's best practice to keep it as $\frac{d}{d\dot{x}}$. It is really as simple as it looks. Given some Lagrangian, say $L(x,\dot{x})=\frac{1}{2}m\dot{x}^2 - V(x)$, then $\frac{\partial L}{\partial \dot{x}} = m\dot{x}$.</p>
Answer 3: <p>In physics, and more generally in functional analysis, when varying an action with respect to the derivative of a variable, say $\dot x$ in your case, we treat $x$ and $\dot x$ as entirely separate. That is,</p> <p>$$\frac{d}{dx}f(\dot x) = 0.$$</p> <p>Likewise, $\frac{d}{d \dot x}f(x)= 0$. This applies to more complicated cases as well; for example,</p> <p>$$\frac{\partial}{\partial \phi} f(\partial_\mu\phi) = 0$$</p> <p>in field theory. It is only after we have derived the Euler-Lagrange equations that we identify $$\dot x = \frac{d}{dt}x$$</p> <p>in order to solve the equations of motion, or, say, quantise the system.</p>
Answer 4: <p>This answer focuses on the important issue of the question, which the OP explains in its body.</p> <p>Let me present an alternative way of thinking, which purges the "variable" mess, to clear up the confusion. Let's consider the Lagrangian $L=\text{Kin}-\text{Pot}$.</p> <p>The Lagrangian, per se, is a function $L: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ (let's assume it doesn't depend on time; time dependence would simply mean $L: \mathbb{R} \times \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$). No derivatives involved. In this case, it is $$L(p,q)=\frac{1}{2}mq^2-V(p).$$ Now, let $\widetilde{\gamma}$ be the function $t \mapsto (\gamma(t),\dot{\gamma}(t))$.</p> <p>The Euler-Lagrange equations then state that $$(\nabla_1L) \circ \widetilde{\gamma} -\big((\nabla_2L)\circ \widetilde{\gamma}\big)'=0,$$ where $\nabla_1L$ means the first $n$ coordinates of the gradient and $\nabla_2L$ means the last $n$ coordinates.</p> <p>Let's compute this in our case. It is clear that $\nabla_1L(p,q)=-\nabla V(p)$ and $\nabla_2 L(p,q)=mq$. Then the Euler-Lagrange equations say that $$(\nabla_1L) \circ \widetilde{\gamma} -\big((\nabla_2L)\circ \widetilde{\gamma}\big)'=0$$ $$\implies (\nabla_1L) (\gamma(t), \dot{\gamma}(t)) -\big((\nabla_2L)(\gamma(t),\dot{\gamma}(t))\big)'=0$$ $$\implies -\nabla V(\gamma(t)) -(m \dot{\gamma}(t))'=0$$ $$\implies -\nabla V(\gamma(t))=m\ddot{\gamma}(t),$$ which is what we would expect.</p> <p>"Differentiating with respect to $\dot{x}$" is just a quicker way to think of the above process. However, it is naturally confusing, since the details of the process are swept under the rug.</p>
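As a hedged cross-check of Answer 4's derivation (a sketch only: it assumes sympy's euler_equations helper from sympy.calculus.euler and picks the concrete potential V(x) = x**2/2 purely for illustration), the machinery should reproduce $m\ddot{\gamma}(t) = -\nabla V(\gamma(t))$:

from sympy import symbols, Function, diff
from sympy.calculus.euler import euler_equations

t, m = symbols('t m', positive=True)
x = Function('x')(t)

# Illustrative concrete choice V(x) = x**2/2, so -grad V(x) = -x.
L = m * diff(x, t)**2 / 2 - x**2 / 2

# euler_equations builds dL/dx - d/dt(dL/dxdot) = 0; the printed equation
# should be equivalent to m*x''(t) = -x(t), as in Answer 4's last line.
print(euler_equations(L, x, t))

The helper performs the bookkeeping Answer 4 spells out: differentiate L in each of its two slots, compose with $t \mapsto (\gamma(t),\dot{\gamma}(t))$, and only then apply $d/dt$ to the second term.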
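A minimal computer-algebra sketch of the two-step process above (Python with sympy; the code and every name in it are an illustration added here, not part of any answer): differentiate $L$ while $x$ and $v=\dot{x}$ are still independent symbols, and only afterwards substitute the trajectory $\gamma(t)$ and take the time derivative.

```python
# Hedged sketch: derive the Euler-Lagrange equation for L = (1/2) m v^2 - V(x)
# by treating x and v (standing for xdot) as independent variables first.
import sympy as sp

t = sp.symbols("t")
m = sp.symbols("m", positive=True)
x, v = sp.symbols("x v")        # independent "slot" variables
V = sp.Function("V")            # unspecified potential
gamma = sp.Function("gamma")    # the trajectory t -> gamma(t)

L = sp.Rational(1, 2) * m * v**2 - V(x)

dL_dx = sp.diff(L, x)           # derivative in the first slot:  -V'(x)
dL_dv = sp.diff(L, v)           # derivative in the second slot:  m*v

# Only now plug in the trajectory and its velocity, then differentiate in t:
on_path = {x: gamma(t), v: sp.diff(gamma(t), t)}
el_lhs = dL_dx.subs(on_path) - sp.diff(dL_dv.subs(on_path), t)

# el_lhs == 0 is the Euler-Lagrange equation, equivalent to
# m*gamma''(t) = -V'(gamma(t)), i.e. Newton's second law.
print(el_lhs)
```

The only point of the sketch is the ordering: `sp.diff(L, v)` is taken while `v` is an independent symbol, which is all that the notation $\frac{\partial L}{\partial \dot{x}}$ abbreviates.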
Singularity positive semidefinite
2,085,040
6
https://math.stackexchange.com/questions/2085040/singularity-positive-semidefinite
<ol> <li>The determinant of a matrix equals the product of its eigenvalues.</li> <li>A positive semidefinite matrix is a symmetric matrix with only nonnegative eigenvalues.</li> <li>A positive definite matrix is a symmetric matrix with only positive eigenvalues.</li> </ol> <p>Combining (1) and (3) yields that a positive definite matrix is always nonsingular, since its determinant is never zero.</p> <p>Is it true that a positive semidefinite matrix has at least one eigenvalue equal to zero, and thus its determinant always equals zero => a positive semidefinite matrix is always singular?</p> <p>One would think that specifically having a positive semidefinite matrix instead of a positive definite matrix implies that at least one of the eigenvalues equals zero.</p> <p>Is this correct?</p>
[ { "answer_id": 2085189, "score": 1, "text": "<p>I see there is a bit of confusion, so I'll try to explain better.</p>\n\n<p>A matrix $A$ is positive semidefinite (p.sd.) if it is symmetric and all its eigenvalues are $\\ge 0$.</p>\n\n<p>A matrix $A$ is positive definite (p.d.) if it is symmetric and all its eigenvalues are $&gt;0$.</p>\n\n<p>This means that every p.d. matrix is also a p.sd. matrix. The set of positive semidefinite matrices <strong>contains</strong> the set of positive definite matrices, but there are some p.sd. matrices that are not p.d., like the matrix that has all elements equal to zero.</p>\n\n<p>As an analogy, you can think about the real and integer numbers: all integer numbers are real numbers, because the set of real numbers contains the set of integer numbers, but there are real numbers, such as $\\sqrt 2$ that are not integer.</p>\n\n<p>The most common example is the identity matrix $I$: all its eigenvalues are $1&gt;0$, so it is a p.d. matrix, but $1\\ge 0$ so it is also a p.sd. matrix. This means that $I$ is a p.sd. matrix and it is invertible.</p>\n" }, { "answer_id": 3127728, "score": 0, "text": "<p>If a matrix <span class=\"math-container\">$M$</span> is Positive definite then for all non-zero <span class=\"math-container\">$x$</span>, <span class=\"math-container\">$x^TMx \\gt 0$</span>.</p>\n\n<p>If a matrix <span class=\"math-container\">$M$</span> is Positive semidefinite then for all non-zero <span class=\"math-container\">$x$</span>, <span class=\"math-container\">$x^TMx \\ge 0$</span>.</p>\n\n<p>So, every positive definite matrix is positive semidefinite, but not vice versa. </p>\n\n<p>If there is a matrix <span class=\"math-container\">$S$</span> which is positive semidefinite but not positive definite then at least one of its eigen values is zero, hence it is a singular matrix. For proof, you can use schur factorization to diagonalize the matrix <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$S = QDQ^T$</span> where <span class=\"math-container\">$D$</span> is a real diagonal matrix formed using the eigen values of <span class=\"math-container\">$S$</span>, and <span class=\"math-container\">$Q$</span> is a real (by construction) unitary (hence full-rank) matrix.</p>\n" }, { "answer_id": 2085189, "score": 1, "text": "<p>I see there is a bit of confusion, so I'll try to explain better.</p>\n\n<p>A matrix $A$ is positive semidefinite (p.sd.) if it is symmetric and all its eigenvalues are $\\ge 0$.</p>\n\n<p>A matrix $A$ is positive definite (p.d.) if it is symmetric and all its eigenvalues are $&gt;0$.</p>\n\n<p>This means that every p.d. matrix is also a p.sd. matrix. The set of positive semidefinite matrices <strong>contains</strong> the set of positive definite matrices, but there are some p.sd. matrices that are not p.d., like the matrix that has all elements equal to zero.</p>\n\n<p>As an analogy, you can think about the real and integer numbers: all integer numbers are real numbers, because the set of real numbers contains the set of integer numbers, but there are real numbers, such as $\\sqrt 2$ that are not integer.</p>\n\n<p>The most common example is the identity matrix $I$: all its eigenvalues are $1&gt;0$, so it is a p.d. matrix, but $1\\ge 0$ so it is also a p.sd. matrix. This means that $I$ is a p.sd. 
</p>\n\n<p>If there is a matrix <span class=\"math-container\">$S$</span> which is positive semidefinite but not positive definite then at least one of its eigen values is zero, hence it is a singular matrix. For proof, you can use schur factorization to diagonalize the matrix <span class=\"math-container\">$S$</span> such that <span class=\"math-container\">$S = QDQ^T$</span> where <span class=\"math-container\">$D$</span> is a real diagonal matrix formed using the eigen values of <span class=\"math-container\">$S$</span>, and <span class=\"math-container\">$Q$</span> is a real (by construction) unitary (hence full-rank) matrix.</p>\n" } ]
Question: Singularity positive semidefinite Answer 1: <p>I see there is a bit of confusion, so I'll try to explain better.</p> <p>A matrix $A$ is positive semidefinite (p.sd.) if it is symmetric and all its eigenvalues are $\ge 0$.</p> <p>A matrix $A$ is positive definite (p.d.) if it is symmetric and all its eigenvalues are $&gt;0$.</p> <p>This means that every p.d. matrix is also a p.sd. matrix. The set of positive semidefinite matrices <strong>contains</strong> the set of positive definite matrices, but there are some p.sd. matrices that are not p.d., like the matrix that has all elements equal to zero.</p> <p>As an analogy, you can think about the real and integer numbers: all integer numbers are real numbers, because the set of real numbers contains the set of integer numbers, but there are real numbers, such as $\sqrt 2$, that are not integers.</p> <p>The most common example is the identity matrix $I$: all its eigenvalues are $1&gt;0$, so it is a p.d. matrix, but $1\ge 0$ so it is also a p.sd. matrix. This means that $I$ is a p.sd. matrix and it is invertible.</p> Answer 2: <p>If a matrix <span class="math-container">$M$</span> is positive definite then for all non-zero <span class="math-container">$x$</span>, <span class="math-container">$x^TMx \gt 0$</span>.</p> <p>If a matrix <span class="math-container">$M$</span> is positive semidefinite then for all non-zero <span class="math-container">$x$</span>, <span class="math-container">$x^TMx \ge 0$</span>.</p> <p>So, every positive definite matrix is positive semidefinite, but not vice versa.</p> <p>If there is a matrix <span class="math-container">$S$</span> which is positive semidefinite but not positive definite, then at least one of its eigenvalues is zero, hence it is a singular matrix. For a proof, you can use the Schur factorization to diagonalize the matrix <span class="math-container">$S$</span> as <span class="math-container">$S = QDQ^T$</span>, where <span class="math-container">$D$</span> is a real diagonal matrix formed from the eigenvalues of <span class="math-container">$S$</span> and <span class="math-container">$Q$</span> is a real (by construction) unitary (hence full-rank) matrix.</p>
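A minimal numeric sketch of the singularity claim above, assuming NumPy is available; the matrix `S` and the random-vector check are illustrative choices, not taken from the original posts:

```python
# A symmetric matrix that is positive semidefinite but not positive definite
# has a zero eigenvalue, hence zero determinant (it is singular).
import numpy as np

S = np.array([[1.0, 1.0],
              [1.0, 1.0]])  # symmetric; eigenvalues are 2 and 0, so PSD but not PD

print(np.linalg.eigvalsh(S))  # [0. 2.] -- one eigenvalue is exactly zero
print(np.linalg.det(S))       # ~0.0   -- singular, as both answers argue

# Quadratic-form characterization from Answer 2: x^T S x >= 0 for all x.
rng = np.random.default_rng(0)
xs = rng.standard_normal((1000, 2))
print((np.einsum('ij,jk,ik->i', xs, S, xs) >= -1e-12).all())  # True
```

Any symmetric matrix with at least one zero eigenvalue would serve equally well here.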
Prove $f[x_0,...,x_{n-1}] = \sum_{i=0}^{n-1}x_i$ if $f(x) = x^n$ for $n\in\mathbb{N}$
2,085,153
6
https://math.stackexchange.com/questions/2085153/prove-fx-0-x-n-1-sum-i-0n-1x-i-if-fx-xn-for-n-in-mathb
<p><strong>Definition</strong> The divided difference of a function <span class="math-container">$f$</span> at points <span class="math-container">$x_0, x_1, ..., x_k$</span> is defined recursively as <span class="math-container">$$ f[x]=f(x), \qquad f[x_0,...,x_k] =\frac{f[x_1,...,x_k]-f[x_0,...,x_{k-1}]}{x_k-x_0}, \quad k \ge 1 $$</span></p> <blockquote> <p>Prove that if <span class="math-container">$f(x) = x^n$</span> for <span class="math-container">$n\in\mathbb{N}$</span>, then <span class="math-container">$f[x_0,...,x_{n-1}] = \sum_{i=0}^{n-1}x_i$</span>.</p> </blockquote> <p>I initially thought about induction, but the problem is that a change in <span class="math-container">$n$</span> causes a change in <span class="math-container">$f$</span>, and I was unable to derive a useful formula for the dependency between <span class="math-container">$a^{n-1}-b^{n-1}$</span> and <span class="math-container">$a^n-b^n$</span>.</p> <p><strong>Example</strong> If <span class="math-container">$f(x) = x^3$</span> then <span class="math-container">$$f[a,b,c]=\frac{f[b,c]-f[a,b]}{c-a}=\frac{\frac{f[c]-f[b]}{c-b}-\frac{f[b]-f[a]}{b-a}}{c-a}=\frac{\frac{c^3-b^3}{c-b}-\frac{a^3-b^3}{a-b}}{c-a}=a+b+c$$</span></p>
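A quick numeric sanity check of the claimed identity, computed directly from the recursive definition above; the helper name `divided_difference` and the node values are illustrative assumptions, not part of the original question:

```python
# Evaluate f[x_0, ..., x_{n-1}] for f(x) = x^n straight from the recursive
# definition and compare it with the sum of the nodes.
def divided_difference(f, xs):
    """f[x_0,...,x_k] = (f[x_1,...,x_k] - f[x_0,...,x_{k-1}]) / (x_k - x_0)."""
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:]) - divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

n = 5
f = lambda x: x ** n
nodes = (0.5, 1.0, 2.0, 3.5, 4.0)    # n distinct points x_0, ..., x_{n-1}
print(divided_difference(f, nodes))   # ~11.0 (up to floating-point rounding)
print(sum(nodes))                     # 11.0, matching f[x_0,...,x_{n-1}] = x_0 + ... + x_{n-1}
```

The recursion follows the definition literally, so it recomputes overlapping subproblems; a production version would fill a triangular table of divided differences instead.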
[ { "answer_id": 3130594, "score": 2, "text": "<p>Here's a sketch of one solution. First, see that <span class=\"math-container\">$f[x_0,\\cdots,x_{n-1}]$</span> is a polynomial of degree <span class=\"math-container\">$1$</span>. Use the fact that divided differences are <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">symmetric</a> to see that we must have</p>\n\n<p><span class=\"math-container\">$$f[x_0,\\cdots,x_{n-1}]=k\\sum_{i=0}^{n-1} x_i$$</span></p>\n\n<p>for some <span class=\"math-container\">$k$</span>. Now use the mean value theorem for divided differences to conclude.</p>\n" }, { "answer_id": 3130666, "score": 2, "text": "<p>We consider Newton's form of the interpolating polynomial. Let <span class=\"math-container\">$f(x) = x^n$</span> and let <span class=\"math-container\">$\\{x_j\\}_{j=0}^n$</span> denote <span class=\"math-container\">$n+1$</span> distinct nodes. Then \n<span class=\"math-container\">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \\dots + f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) + f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i).$$</span>\nWe will exploit the fact that the coefficient of <span class=\"math-container\">$x^{n-1}$</span> is <span class=\"math-container\">$0$</span>. We begin by studying the structure of the different terms. The last term is <span class=\"math-container\">\\begin{align}\n&amp; f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i) \\\\\n&amp; = f[x_0,\\dotsc,x_n] x^n - f[x_0,\\dotsc,x_n] (x_0 + x_1 + \\dots + x_{n-1}) x^{n-1} + O(x^{n-2}),\n\\end{align}</span>\nwhere <span class=\"math-container\">$O(x^{n-2})$</span> means \"terms with <span class=\"math-container\">$x$</span>-degree <span class=\"math-container\">$\\leq n-2$</span>\".\nIt is possible to conclude that <span class=\"math-container\">$$f[x_0,\\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class=\"math-container\">$n-1$</span> and the coefficient of <span class=\"math-container\">$x^n$</span> is <span class=\"math-container\">$1$</span> (alternatively, see <a href=\"https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585\">this post</a>). As for the second-to-last term, we have \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) = f[x_0,\\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span>\nSince all other terms have degree at most <span class=\"math-container\">$n-2$</span>, we conclude that \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] = x_0 + x_1 + \\dots + x_{n-1}.$$</span>\nIt is worth stressing our use of the fact that two polynomials are identical if and only if their coefficients are identical. This completes the analysis.</p>\n" }, { "answer_id": 3130716, "score": 0, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">Leibniz rule</a> states that \n<span class=\"math-container\">$$\n(fg)[x_0,\\dots,x_n] = f[x_0]g[x_0,\\dots,x_n] + f[x_0,x_1]g[x_1,\\dots,x_n] + \\dots + f[x_0,\\dots,x_n]g[x_n].\n$$</span></p>\n\n<p>Proceed by induction on <span class=\"math-container\">$n$</span>. For convenience, let <span class=\"math-container\">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p>\n\n<p>Assume that the result holds for some positive integer <span class=\"math-container\">$k$</span>. 
It can be shown that \n<span class=\"math-container\">\\begin{equation} \nf_1[x_0,\\dots,x_j] = \n\\begin{cases}\n1, &amp; j=1; \\\\\n0, &amp; j&gt;1.\n\\end{cases} \\tag{1} \\label{ddi}\n\\end{equation}</span></p>\n\n<p>Thus, \n<span class=\"math-container\">\\begin{align}\nf_{k+1}[x_0,\\dots,x_k] \n&amp;= \\left(f_1 f_k\\right)[x_0,\\dots,x_k] \\\\\n&amp;= f_1[x_0]f_k[x_0,\\dots,x_k] + \nf_1[x_0, x_1]f_k[x_1,\\dots,x_k] + \\cdots + f_1[x_0,\\dots,x_k]f_k[x_k] \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\\dots,x_k] \\tag{by \\eqref{ddi}} \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + \\sum_{j=1}^{k} x_j. \\tag{IH}\n\\end{align}</span></p>\n\n<p>By the <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)\" rel=\"nofollow noreferrer\">mean value theorem for divided differences</a>, <span class=\"math-container\">$\\exists \\xi \\in \\left[ \\min_j\\{x_j\\},\\max_j\\{x_j\\} \\right]$</span> such that \n<span class=\"math-container\">$$\nf_k[x_0,\\dots,x_k] = \\frac{f_k^{(k)}(\\xi)}{k!}.\n$$</span>\nBut <span class=\"math-container\">$f_k^{(k)}(x) = k!$</span>, so <span class=\"math-container\">$f_k[x_0,\\dots,x_k]=1$</span>. Thus, \n<span class=\"math-container\">$$\nf_{k+1}[x_0,\\dots,x_k] = \\sum_{j=0}^k x_j\n$$</span>\nand the result follows by induction.</p>\n" }, { "answer_id": 3130723, "score": 0, "text": "<p>All of this can be achieved by weak induction.</p>\n\n<p>I'll use <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class=\"math-container\">$f[x_0,x_1,...,x_k]$</span> where <span class=\"math-container\">$f(r)=r^n$</span>.</p>\n\n<p>First, we'll show that <span class=\"math-container\">$f[s,t]_n=\\sum_{a+b=n-1}(s^at^b)$</span> for all <span class=\"math-container\">$n$</span> (where <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> are nonnegative) by simple induction.<br>\n(a proof can be found <a href=\"https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1\">here</a> so you can skip the next paragraph if you want) </p>\n\n<p>Clearly <span class=\"math-container\">$f[s,t]_1=\\sum_{a+b=0}(s^at^b)$</span> as <span class=\"math-container\">$$f[s,t]_1=\\frac{f[t]_1-f[s]_1}{t-s}=\\frac{t-s}{t-s}=1=s^0t^0=\\sum_{a+b=0}(s^at^b)$$</span> </p>\n\n<p>Now, suppose for some <span class=\"math-container\">$p$</span>, <span class=\"math-container\">$f[s,t]_p=\\sum_{a+b=p-1}(s^at^b)$</span>.<br>\nThen, we can relate <span class=\"math-container\">$f[s,t]_{p+1}$</span> and <span class=\"math-container\">$f[s,t]_p$</span> as follows: <span class=\"math-container\">$$f[s,t]_{p+1}=\\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\\frac{t^{p+1}-s^{p+1}}{t-s}=\\frac{t^pt-s^ps}{t-s}=\\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\\\=t^p+s\\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p>\n\n<p>So (by the induction hypothesis): <span class=\"math-container\">$$f[s,t]_{p+1}=t^p+s\\bigg(\\sum_{a+b=p-1}(s^at^b)\\bigg)=t^p+s\\bigg(\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^at^b)\\bigg)=t^p+\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^{a+1}t^b)$$</span> </p>\n\n<p>Using a change of variable from <span class=\"math-container\">$a+1$</span> to <span class=\"math-container\">$c$</span>: <span class=\"math-container\">$$t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq p-1}(s^ct^b)=s^0t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq p-1}(s^ct^b)=\\sum_{c+b=p,\\\\0\\leq c,b\\leq p}(s^ct^b)=\\sum_{c+b=(p+1)-1}(s^ct^b)\\\\=\\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p>\n\n<p>That proves the general 
formula for <span class=\"math-container\">$f[s,t]_n$</span>.</p>\n\n<p>Now suppose you have shown that <span class=\"math-container\">$f[x_0,x_1,...,x_{k-1}]=\\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class=\"math-container\">$n$</span> for some <span class=\"math-container\">$k$</span>.<br>\n(again, <span class=\"math-container\">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative) (the case for <span class=\"math-container\">$k=2$</span> was just proved above)</p>\n\n<p>Then: <span class=\"math-container\">$$f[x_0,x_1,...,x_k]=\\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p>\n\n<p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage: all that matters is that whenever the powers appearing on the terms <span class=\"math-container\">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class=\"math-container\">$x_k$</span> and <span class=\"math-container\">$x_0$</span> will necessarily be the same (because the sum of all powers <span class=\"math-container\">$=n-k+1$</span>) </p>\n\n<p><span class=\"math-container\">$$\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\\bigg))\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\;f[x_0,x_k]_{i_k})\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\\bigg))$$</span> </p>\n\n<p>Noting that <span class=\"math-container\">$i_1+i_2+...+i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p>\n\n<p><span class=\"math-container\">$$\\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p>\n\n<p>That settles the general formula for <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> (for all <span class=\"math-container\">$n$</span> and <span class=\"math-container\">$k$</span>). </p>\n\n<p>Now, your question is about what happens when <span class=\"math-container\">$k=n-1$</span>. Well, in that case: <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n=\\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p>\n\n<p>Since all of <span class=\"math-container\">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class=\"math-container\">$i_k$</span>'s to be 1 and the rest to be zero. 
</p>\n\n<p>So we get <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n\\\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\\;.\\;.\\;.\\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\\\=x_0+x_1+...+x_{n-1}=\\sum_{i=0}^{n-1}x_i$$</span></p>\n\n<p>(please comment or edit for any corrections)</p>\n" }, { "answer_id": 3130594, "score": 2, "text": "<p>Here's a sketch of one solution.First see that <span class=\"math-container\">$f[x_0,\\cdots,x_{n-1}]$</span> is a polynomial of degree <span class=\"math-container\">$1$</span>. Use the fact that they are <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">symmetric</a> to see that we must have</p>\n\n<p><span class=\"math-container\">$$f[x_0,\\cdots,x_{n-1}]=k\\sum_{i=0}^{n-1} x_i$$</span></p>\n\n<p>for some <span class=\"math-container\">$k$</span>. Now use their mean value theorem to conclude.</p>\n" }, { "answer_id": 3130666, "score": 2, "text": "<p>We consider Newton's form of the interpolating polynomial. Let <span class=\"math-container\">$f(x) = x^n$</span> and let <span class=\"math-container\">$\\{x_j\\}_{j=0}^n$</span> denote <span class=\"math-container\">$n+1$</span> distinct nodes. Then \n<span class=\"math-container\">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \\dots + f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) + f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i).$$</span>\nWe will exploit the fact that the coefficient of <span class=\"math-container\">$x^{n-1}$</span> is <span class=\"math-container\">$0$</span>. We begin by studying the structure of the different terms. The last term is <span class=\"math-container\">\\begin{align}\n&amp; f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i) \\\\\n&amp; = f[x_0,\\dotsc,x_n] x^n - f[x_0,\\dotsc,x_n] (x_0 + x_1 + \\dots + x_{n-1}) x^{n-1} + O(x^{n-2}),\n\\end{align}</span>\nwhere <span class=\"math-container\">$O(x^{n-2})$</span> means \"terms with <span class=\"math-container\">$x$</span>-degree <span class=\"math-container\">$\\leq n-2$</span>\".\nIt is possible to conclude that <span class=\"math-container\">$$f[x_0,\\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class=\"math-container\">$n-1$</span> and the coefficient of <span class=\"math-container\">$x^n$</span> is <span class=\"math-container\">$1$</span> (alternatively, see <a href=\"https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585\">this post</a>). As for the second to last term we have \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) = f[x_0,\\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span>\nSince all other terms have degree at most <span class=\"math-container\">$n-2$</span> we conclude that \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] = x_0 + x_1 + \\dots + x_{n-1}.$$</span>\nIt is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p>\n" }, { "answer_id": 3130716, "score": 0, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">Liebniz rule</a> states that \n<span class=\"math-container\">$$\n(fg)[x_0,\\dots,x_n] = f[x_0]g[x_0,\\dots,x_n] + f[x_0,x_1]g[x_1,\\dots,x_n] + \\dots + f[x_0,\\dots,x_n]g[x_n].\n$$</span></p>\n\n<p>Proceed by induction on <span class=\"math-container\">$n$</span>. 
For convenience, let <span class=\"math-container\">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p>\n\n<p>Assume that the result holds for some positive integer <span class=\"math-container\">$k$</span>. It can be shown that \n<span class=\"math-container\">\\begin{equation} \nf_1[x_0,\\dots,x_j] = \n\\begin{cases}\n1, &amp; j=1; \\\\\n0, &amp; j&gt;1.\n\\end{cases} \\tag{1} \\label{ddi}\n\\end{equation}</span></p>\n\n<p>Thus, \n<span class=\"math-container\">\\begin{align}\nf_{k+1}[x_0,\\dots,x_k] \n&amp;= \\left(f_1 f_k\\right)[x_0,\\dots,x_k] \\\\\n&amp;= f_1[x_0]f_k[x_0,\\dots,x_k] + \nf_1[x_0, x_1]f_k[x_1,\\dots,x_k] + \\cdots + f_1[x_0,\\dots,x_k]f_k[x_k] \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\\dots,x_k] \\tag{by \\eqref{ddi}} \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + \\sum_{j=1}^{k} x_j. \\tag{IH}\n\\end{align}</span></p>\n\n<p>By the <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)\" rel=\"nofollow noreferrer\">mean value theorem for divided differences</a>, <span class=\"math-container\">$\\exists \\xi \\in \\left[ \\min_j\\{x_j\\},\\max_j\\{x_j\\} \\right]$</span> such that \n<span class=\"math-container\">$$\nf_k[x_0,\\dots,x_k] = \\frac{f_k^{(k)}(\\xi)}{k!}.\n$$</span>\nBut <span class=\"math-container\">$f_k^{(k)}(x) = k!$</span>, so <span class=\"math-container\">$f_k[x_0,\\dots,x_k]=1$</span>. Thus, \n<span class=\"math-container\">$$\nf_{k+1}[x_0,\\dots,x_k] = \\sum_{j=0}^k x_j\n$$</span>\nand the result follows by induction.</p>\n" }, { "answer_id": 3130723, "score": 0, "text": "<p>All of this can be achieved by weak induction.</p>\n\n<p>I'll use <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class=\"math-container\">$f[x_0,x_1,...,x_k]$</span> where <span class=\"math-container\">$f(r)=r^n$</span>.</p>\n\n<p>First, we'll show that <span class=\"math-container\">$f[s,t]_n=\\sum_{a+b=n-1}(s^at^b)$</span> for all <span class=\"math-container\">$n$</span> (where <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> are nonnegative) by simple induction.<br>\n(a proof can be found <a href=\"https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1\">here</a> so you can skip the next paragraph if you want) </p>\n\n<p>Clearly <span class=\"math-container\">$f[s,t]_1=\\sum_{a+b=0}(s^at^b)$</span> as <span class=\"math-container\">$$f[s,t]_1=\\frac{f[t]_1-f[s]_1}{t-s}=\\frac{t-s}{t-s}=1=s^0t^0=\\sum_{a+b=0}(s^at^b)$$</span> </p>\n\n<p>Now, suppose for some <span class=\"math-container\">$p$</span>, <span class=\"math-container\">$f[s,t]_p=\\sum_{a+b=p-1}(s^at^b)$</span>.<br>\nThen, we can relate <span class=\"math-container\">$f[s,t]_{p+1}$</span> and <span class=\"math-container\">$f[s,t]_p$</span> as follows: <span class=\"math-container\">$$f[s,t]_{p+1}=\\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\\frac{t^{p+1}-s^{p+1}}{t-s}=\\frac{t^pt-s^ps}{t-s}=\\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\\\=t^p+s\\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p>\n\n<p>So (by the induction hypothesis): <span class=\"math-container\">$$f[s,t]_{p+1}=t^p+s\\bigg(\\sum_{a+b=p-1}(s^at^b)\\bigg)=t^p+s\\bigg(\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^at^b)\\bigg)=t^p+\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^{a+1}t^b)$$</span> </p>\n\n<p>Using a change of variable from <span class=\"math-container\">$a+1$</span> to <span class=\"math-container\">$c$</span>: <span class=\"math-container\">$$t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq 
p-1}(s^ct^b)=s^0t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq p-1}(s^ct^b)=\\sum_{c+b=p,\\\\0\\leq c,b\\leq p}(s^ct^b)=\\sum_{c+b=(p+1)-1}(s^ct^b)\\\\=\\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p>\n\n<p>That proves the general formula for <span class=\"math-container\">$f[s,t]_n$</span>.</p>\n\n<p>Now suppose you have shown that <span class=\"math-container\">$f[x_0,x_1,...,x_{k-1}]=\\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class=\"math-container\">$n$</span> for some <span class=\"math-container\">$k$</span>.<br>\n(again, <span class=\"math-container\">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class=\"math-container\">$k=2$</span> was just proved above)</p>\n\n<p>Then: <span class=\"math-container\">$$f[x_0,x_1,...,x_k]=\\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p>\n\n<p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class=\"math-container\">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class=\"math-container\">$x_k$</span> and <span class=\"math-container\">$x_0$</span> will necessarily be the same (because the sum of all powers <span class=\"math-container\">$=n-k+1$</span>) </p>\n\n<p><span class=\"math-container\">$$\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\\bigg))\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\;f[x_0,x_k]_{i_k})\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\\bigg))$$</span> </p>\n\n<p>Noting that <span class=\"math-container\">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p>\n\n<p><span class=\"math-container\">$$\\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p>\n\n<p>So that does the general formula for <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class=\"math-container\">$n$</span> and <span class=\"math-container\">$k$</span>). </p>\n\n<p>Now, your question is about what happens when <span class=\"math-container\">$k=n-1$</span>. Well, in that case: <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n=\\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p>\n\n<p>Since all of <span class=\"math-container\">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class=\"math-container\">$i_k$</span>'s to be 1 and the rest to be zero. 
</p>\n\n<p>So we get <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n\\\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\\;.\\;.\\;.\\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\\\=x_0+x_1+...+x_{n-1}=\\sum_{i=0}^{n-1}x_i$$</span></p>\n\n<p>(please comment or edit for any corrections)</p>\n" }, { "answer_id": 3130594, "score": 2, "text": "<p>Here's a sketch of one solution.First see that <span class=\"math-container\">$f[x_0,\\cdots,x_{n-1}]$</span> is a polynomial of degree <span class=\"math-container\">$1$</span>. Use the fact that they are <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">symmetric</a> to see that we must have</p>\n\n<p><span class=\"math-container\">$$f[x_0,\\cdots,x_{n-1}]=k\\sum_{i=0}^{n-1} x_i$$</span></p>\n\n<p>for some <span class=\"math-container\">$k$</span>. Now use their mean value theorem to conclude.</p>\n" }, { "answer_id": 3130666, "score": 2, "text": "<p>We consider Newton's form of the interpolating polynomial. Let <span class=\"math-container\">$f(x) = x^n$</span> and let <span class=\"math-container\">$\\{x_j\\}_{j=0}^n$</span> denote <span class=\"math-container\">$n+1$</span> distinct nodes. Then \n<span class=\"math-container\">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \\dots + f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) + f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i).$$</span>\nWe will exploit the fact that the coefficient of <span class=\"math-container\">$x^{n-1}$</span> is <span class=\"math-container\">$0$</span>. We begin by studying the structure of the different terms. The last term is <span class=\"math-container\">\\begin{align}\n&amp; f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i) \\\\\n&amp; = f[x_0,\\dotsc,x_n] x^n - f[x_0,\\dotsc,x_n] (x_0 + x_1 + \\dots + x_{n-1}) x^{n-1} + O(x^{n-2}),\n\\end{align}</span>\nwhere <span class=\"math-container\">$O(x^{n-2})$</span> means \"terms with <span class=\"math-container\">$x$</span>-degree <span class=\"math-container\">$\\leq n-2$</span>\".\nIt is possible to conclude that <span class=\"math-container\">$$f[x_0,\\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class=\"math-container\">$n-1$</span> and the coefficient of <span class=\"math-container\">$x^n$</span> is <span class=\"math-container\">$1$</span> (alternatively, see <a href=\"https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585\">this post</a>). As for the second to last term we have \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) = f[x_0,\\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span>\nSince all other terms have degree at most <span class=\"math-container\">$n-2$</span> we conclude that \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] = x_0 + x_1 + \\dots + x_{n-1}.$$</span>\nIt is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p>\n" }, { "answer_id": 3130716, "score": 0, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">Liebniz rule</a> states that \n<span class=\"math-container\">$$\n(fg)[x_0,\\dots,x_n] = f[x_0]g[x_0,\\dots,x_n] + f[x_0,x_1]g[x_1,\\dots,x_n] + \\dots + f[x_0,\\dots,x_n]g[x_n].\n$$</span></p>\n\n<p>Proceed by induction on <span class=\"math-container\">$n$</span>. 
, { "answer_id": 3130716, "score": 0, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">Leibniz rule</a> states that \n<span class=\"math-container\">$$\n(fg)[x_0,\\dots,x_n] = f[x_0]g[x_0,\\dots,x_n] + f[x_0,x_1]g[x_1,\\dots,x_n] + \\dots + f[x_0,\\dots,x_n]g[x_n].\n$$</span></p>\n\n<p>Proceed by induction on <span class=\"math-container\">$n$</span>. For convenience, let <span class=\"math-container\">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p>\n\n<p>Assume that the result holds for some positive integer <span class=\"math-container\">$k$</span>. It can be shown that \n<span class=\"math-container\">\\begin{equation} \nf_1[x_0,\\dots,x_j] = \n\\begin{cases}\n1, &amp; j=1; \\\\\n0, &amp; j&gt;1,\n\\end{cases} \\tag{1} \\label{ddi}\n\\end{equation}</span>\nwhile, of course, <span class=\"math-container\">$f_1[x_0]=x_0$</span>.</p>\n\n<p>Thus, \n<span class=\"math-container\">\\begin{align}\nf_{k+1}[x_0,\\dots,x_k] \n&amp;= \\left(f_1 f_k\\right)[x_0,\\dots,x_k] \\\\\n&amp;= f_1[x_0]f_k[x_0,\\dots,x_k] + \nf_1[x_0, x_1]f_k[x_1,\\dots,x_k] + \\cdots + f_1[x_0,\\dots,x_k]f_k[x_k] \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\\dots,x_k] \\tag{by \\eqref{ddi}} \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + \\sum_{j=1}^{k} x_j. \\tag{IH}\n\\end{align}</span></p>\n\n<p>By the <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)\" rel=\"nofollow noreferrer\">mean value theorem for divided differences</a>, <span class=\"math-container\">$\\exists \\xi \\in \\left[ \\min_j\\{x_j\\},\\max_j\\{x_j\\} \\right]$</span> such that \n<span class=\"math-container\">$$\nf_k[x_0,\\dots,x_k] = \\frac{f_k^{(k)}(\\xi)}{k!}.\n$$</span>\nBut <span class=\"math-container\">$f_k^{(k)}(x) = k!$</span>, so <span class=\"math-container\">$f_k[x_0,\\dots,x_k]=1$</span>. Thus, \n<span class=\"math-container\">$$\nf_{k+1}[x_0,\\dots,x_k] = \\sum_{j=0}^k x_j\n$$</span>\nand the result follows by induction.</p>\n" }
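\n\n<p>(The Leibniz rule itself is easy to test numerically. Here is a small Python sketch, with names of my own choosing, that checks the identity for two sample polynomials using exact rational arithmetic.)</p>\n\n<pre><code>
from fractions import Fraction

def dd(xs, f):
    # f[x_0,...,x_k] by the recursive definition
    if len(xs) == 1:
        return f(xs[0])
    return (dd(xs[1:], f) - dd(xs[:-1], f)) / (xs[-1] - xs[0])

xs = [Fraction(v) for v in (1, 3, 4, 9)]
f = lambda x: x ** 2 + 1
g = lambda x: x ** 3 - x

# Leibniz rule: (fg)[x_0,...,x_n] = sum over r of f[x_0,...,x_r] * g[x_r,...,x_n]
lhs = dd(xs, lambda x: f(x) * g(x))
rhs = sum(dd(xs[:r + 1], f) * dd(xs[r:], g) for r in range(len(xs)))
assert lhs == rhs
</code></pre>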
p-1}(s^ct^b)=s^0t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq p-1}(s^ct^b)=\\sum_{c+b=p,\\\\0\\leq c,b\\leq p}(s^ct^b)=\\sum_{c+b=(p+1)-1}(s^ct^b)\\\\=\\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p>\n\n<p>That proves the general formula for <span class=\"math-container\">$f[s,t]_n$</span>.</p>\n\n<p>Now suppose you have shown that <span class=\"math-container\">$f[x_0,x_1,...,x_{k-1}]=\\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class=\"math-container\">$n$</span> for some <span class=\"math-container\">$k$</span>.<br>\n(again, <span class=\"math-container\">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class=\"math-container\">$k=2$</span> was just proved above)</p>\n\n<p>Then: <span class=\"math-container\">$$f[x_0,x_1,...,x_k]=\\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p>\n\n<p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class=\"math-container\">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class=\"math-container\">$x_k$</span> and <span class=\"math-container\">$x_0$</span> will necessarily be the same (because the sum of all powers <span class=\"math-container\">$=n-k+1$</span>) </p>\n\n<p><span class=\"math-container\">$$\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\\bigg))\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\;f[x_0,x_k]_{i_k})\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\\bigg))$$</span> </p>\n\n<p>Noting that <span class=\"math-container\">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p>\n\n<p><span class=\"math-container\">$$\\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p>\n\n<p>So that does the general formula for <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class=\"math-container\">$n$</span> and <span class=\"math-container\">$k$</span>). </p>\n\n<p>Now, your question is about what happens when <span class=\"math-container\">$k=n-1$</span>. Well, in that case: <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n=\\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p>\n\n<p>Since all of <span class=\"math-container\">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class=\"math-container\">$i_k$</span>'s to be 1 and the rest to be zero. 
</p>\n\n<p>So we get <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n\\\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\\;.\\;.\\;.\\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\\\=x_0+x_1+...+x_{n-1}=\\sum_{i=0}^{n-1}x_i$$</span></p>\n\n<p>(please comment or edit for any corrections)</p>\n" }, { "answer_id": 3130594, "score": 2, "text": "<p>Here's a sketch of one solution.First see that <span class=\"math-container\">$f[x_0,\\cdots,x_{n-1}]$</span> is a polynomial of degree <span class=\"math-container\">$1$</span>. Use the fact that they are <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">symmetric</a> to see that we must have</p>\n\n<p><span class=\"math-container\">$$f[x_0,\\cdots,x_{n-1}]=k\\sum_{i=0}^{n-1} x_i$$</span></p>\n\n<p>for some <span class=\"math-container\">$k$</span>. Now use their mean value theorem to conclude.</p>\n" }, { "answer_id": 3130666, "score": 2, "text": "<p>We consider Newton's form of the interpolating polynomial. Let <span class=\"math-container\">$f(x) = x^n$</span> and let <span class=\"math-container\">$\\{x_j\\}_{j=0}^n$</span> denote <span class=\"math-container\">$n+1$</span> distinct nodes. Then \n<span class=\"math-container\">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \\dots + f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) + f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i).$$</span>\nWe will exploit the fact that the coefficient of <span class=\"math-container\">$x^{n-1}$</span> is <span class=\"math-container\">$0$</span>. We begin by studying the structure of the different terms. The last term is <span class=\"math-container\">\\begin{align}\n&amp; f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i) \\\\\n&amp; = f[x_0,\\dotsc,x_n] x^n - f[x_0,\\dotsc,x_n] (x_0 + x_1 + \\dots + x_{n-1}) x^{n-1} + O(x^{n-2}),\n\\end{align}</span>\nwhere <span class=\"math-container\">$O(x^{n-2})$</span> means \"terms with <span class=\"math-container\">$x$</span>-degree <span class=\"math-container\">$\\leq n-2$</span>\".\nIt is possible to conclude that <span class=\"math-container\">$$f[x_0,\\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class=\"math-container\">$n-1$</span> and the coefficient of <span class=\"math-container\">$x^n$</span> is <span class=\"math-container\">$1$</span> (alternatively, see <a href=\"https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585\">this post</a>). As for the second to last term we have \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) = f[x_0,\\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span>\nSince all other terms have degree at most <span class=\"math-container\">$n-2$</span> we conclude that \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] = x_0 + x_1 + \\dots + x_{n-1}.$$</span>\nIt is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p>\n" }, { "answer_id": 3130716, "score": 0, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">Liebniz rule</a> states that \n<span class=\"math-container\">$$\n(fg)[x_0,\\dots,x_n] = f[x_0]g[x_0,\\dots,x_n] + f[x_0,x_1]g[x_1,\\dots,x_n] + \\dots + f[x_0,\\dots,x_n]g[x_n].\n$$</span></p>\n\n<p>Proceed by induction on <span class=\"math-container\">$n$</span>. 
For convenience, let <span class=\"math-container\">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p>\n\n<p>Assume that the result holds for some positive integer <span class=\"math-container\">$k$</span>. It can be shown that \n<span class=\"math-container\">\\begin{equation} \nf_1[x_0,\\dots,x_j] = \n\\begin{cases}\n1, &amp; j=1; \\\\\n0, &amp; j&gt;1.\n\\end{cases} \\tag{1} \\label{ddi}\n\\end{equation}</span></p>\n\n<p>Thus, \n<span class=\"math-container\">\\begin{align}\nf_{k+1}[x_0,\\dots,x_k] \n&amp;= \\left(f_1 f_k\\right)[x_0,\\dots,x_k] \\\\\n&amp;= f_1[x_0]f_k[x_0,\\dots,x_k] + \nf_1[x_0, x_1]f_k[x_1,\\dots,x_k] + \\cdots + f_1[x_0,\\dots,x_k]f_k[x_k] \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\\dots,x_k] \\tag{by \\eqref{ddi}} \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + \\sum_{j=1}^{k} x_j. \\tag{IH}\n\\end{align}</span></p>\n\n<p>By the <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)\" rel=\"nofollow noreferrer\">mean value theorem for divided differences</a>, <span class=\"math-container\">$\\exists \\xi \\in \\left[ \\min_j\\{x_j\\},\\max_j\\{x_j\\} \\right]$</span> such that \n<span class=\"math-container\">$$\nf_k[x_0,\\dots,x_k] = \\frac{f_k^{(k)}(\\xi)}{k!}.\n$$</span>\nBut <span class=\"math-container\">$f_k^{(k)}(x) = k!$</span>, so <span class=\"math-container\">$f_k[x_0,\\dots,x_k]=1$</span>. Thus, \n<span class=\"math-container\">$$\nf_{k+1}[x_0,\\dots,x_k] = \\sum_{j=0}^k x_j\n$$</span>\nand the result follows by induction.</p>\n" }, { "answer_id": 3130723, "score": 0, "text": "<p>All of this can be achieved by weak induction.</p>\n\n<p>I'll use <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class=\"math-container\">$f[x_0,x_1,...,x_k]$</span> where <span class=\"math-container\">$f(r)=r^n$</span>.</p>\n\n<p>First, we'll show that <span class=\"math-container\">$f[s,t]_n=\\sum_{a+b=n-1}(s^at^b)$</span> for all <span class=\"math-container\">$n$</span> (where <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> are nonnegative) by simple induction.<br>\n(a proof can be found <a href=\"https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1\">here</a> so you can skip the next paragraph if you want) </p>\n\n<p>Clearly <span class=\"math-container\">$f[s,t]_1=\\sum_{a+b=0}(s^at^b)$</span> as <span class=\"math-container\">$$f[s,t]_1=\\frac{f[t]_1-f[s]_1}{t-s}=\\frac{t-s}{t-s}=1=s^0t^0=\\sum_{a+b=0}(s^at^b)$$</span> </p>\n\n<p>Now, suppose for some <span class=\"math-container\">$p$</span>, <span class=\"math-container\">$f[s,t]_p=\\sum_{a+b=p-1}(s^at^b)$</span>.<br>\nThen, we can relate <span class=\"math-container\">$f[s,t]_{p+1}$</span> and <span class=\"math-container\">$f[s,t]_p$</span> as follows: <span class=\"math-container\">$$f[s,t]_{p+1}=\\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\\frac{t^{p+1}-s^{p+1}}{t-s}=\\frac{t^pt-s^ps}{t-s}=\\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\\\=t^p+s\\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p>\n\n<p>So (by the induction hypothesis): <span class=\"math-container\">$$f[s,t]_{p+1}=t^p+s\\bigg(\\sum_{a+b=p-1}(s^at^b)\\bigg)=t^p+s\\bigg(\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^at^b)\\bigg)=t^p+\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^{a+1}t^b)$$</span> </p>\n\n<p>Using a change of variable from <span class=\"math-container\">$a+1$</span> to <span class=\"math-container\">$c$</span>: <span class=\"math-container\">$$t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq 
p-1}(s^ct^b)=s^0t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq p-1}(s^ct^b)=\\sum_{c+b=p,\\\\0\\leq c,b\\leq p}(s^ct^b)=\\sum_{c+b=(p+1)-1}(s^ct^b)\\\\=\\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p>\n\n<p>That proves the general formula for <span class=\"math-container\">$f[s,t]_n$</span>.</p>\n\n<p>Now suppose you have shown that <span class=\"math-container\">$f[x_0,x_1,...,x_{k-1}]=\\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class=\"math-container\">$n$</span> for some <span class=\"math-container\">$k$</span>.<br>\n(again, <span class=\"math-container\">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class=\"math-container\">$k=2$</span> was just proved above)</p>\n\n<p>Then: <span class=\"math-container\">$$f[x_0,x_1,...,x_k]=\\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p>\n\n<p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class=\"math-container\">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class=\"math-container\">$x_k$</span> and <span class=\"math-container\">$x_0$</span> will necessarily be the same (because the sum of all powers <span class=\"math-container\">$=n-k+1$</span>) </p>\n\n<p><span class=\"math-container\">$$\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\\bigg))\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\;f[x_0,x_k]_{i_k})\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\\bigg))$$</span> </p>\n\n<p>Noting that <span class=\"math-container\">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p>\n\n<p><span class=\"math-container\">$$\\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p>\n\n<p>So that does the general formula for <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class=\"math-container\">$n$</span> and <span class=\"math-container\">$k$</span>). </p>\n\n<p>Now, your question is about what happens when <span class=\"math-container\">$k=n-1$</span>. Well, in that case: <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n=\\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p>\n\n<p>Since all of <span class=\"math-container\">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class=\"math-container\">$i_k$</span>'s to be 1 and the rest to be zero. 
</p>\n\n<p>So we get <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n\\\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\\;.\\;.\\;.\\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\\\=x_0+x_1+...+x_{n-1}=\\sum_{i=0}^{n-1}x_i$$</span></p>\n\n<p>(please comment or edit for any corrections)</p>\n" }, { "answer_id": 3130594, "score": 2, "text": "<p>Here's a sketch of one solution.First see that <span class=\"math-container\">$f[x_0,\\cdots,x_{n-1}]$</span> is a polynomial of degree <span class=\"math-container\">$1$</span>. Use the fact that they are <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">symmetric</a> to see that we must have</p>\n\n<p><span class=\"math-container\">$$f[x_0,\\cdots,x_{n-1}]=k\\sum_{i=0}^{n-1} x_i$$</span></p>\n\n<p>for some <span class=\"math-container\">$k$</span>. Now use their mean value theorem to conclude.</p>\n" }, { "answer_id": 3130666, "score": 2, "text": "<p>We consider Newton's form of the interpolating polynomial. Let <span class=\"math-container\">$f(x) = x^n$</span> and let <span class=\"math-container\">$\\{x_j\\}_{j=0}^n$</span> denote <span class=\"math-container\">$n+1$</span> distinct nodes. Then \n<span class=\"math-container\">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \\dots + f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) + f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i).$$</span>\nWe will exploit the fact that the coefficient of <span class=\"math-container\">$x^{n-1}$</span> is <span class=\"math-container\">$0$</span>. We begin by studying the structure of the different terms. The last term is <span class=\"math-container\">\\begin{align}\n&amp; f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i) \\\\\n&amp; = f[x_0,\\dotsc,x_n] x^n - f[x_0,\\dotsc,x_n] (x_0 + x_1 + \\dots + x_{n-1}) x^{n-1} + O(x^{n-2}),\n\\end{align}</span>\nwhere <span class=\"math-container\">$O(x^{n-2})$</span> means \"terms with <span class=\"math-container\">$x$</span>-degree <span class=\"math-container\">$\\leq n-2$</span>\".\nIt is possible to conclude that <span class=\"math-container\">$$f[x_0,\\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class=\"math-container\">$n-1$</span> and the coefficient of <span class=\"math-container\">$x^n$</span> is <span class=\"math-container\">$1$</span> (alternatively, see <a href=\"https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585\">this post</a>). As for the second to last term we have \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) = f[x_0,\\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span>\nSince all other terms have degree at most <span class=\"math-container\">$n-2$</span> we conclude that \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] = x_0 + x_1 + \\dots + x_{n-1}.$$</span>\nIt is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p>\n" }, { "answer_id": 3130716, "score": 0, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">Liebniz rule</a> states that \n<span class=\"math-container\">$$\n(fg)[x_0,\\dots,x_n] = f[x_0]g[x_0,\\dots,x_n] + f[x_0,x_1]g[x_1,\\dots,x_n] + \\dots + f[x_0,\\dots,x_n]g[x_n].\n$$</span></p>\n\n<p>Proceed by induction on <span class=\"math-container\">$n$</span>. 
For convenience, let <span class=\"math-container\">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p>\n\n<p>Assume that the result holds for some positive integer <span class=\"math-container\">$k$</span>. It can be shown that \n<span class=\"math-container\">\\begin{equation} \nf_1[x_0,\\dots,x_j] = \n\\begin{cases}\n1, &amp; j=1; \\\\\n0, &amp; j&gt;1.\n\\end{cases} \\tag{1} \\label{ddi}\n\\end{equation}</span></p>\n\n<p>Thus, \n<span class=\"math-container\">\\begin{align}\nf_{k+1}[x_0,\\dots,x_k] \n&amp;= \\left(f_1 f_k\\right)[x_0,\\dots,x_k] \\\\\n&amp;= f_1[x_0]f_k[x_0,\\dots,x_k] + \nf_1[x_0, x_1]f_k[x_1,\\dots,x_k] + \\cdots + f_1[x_0,\\dots,x_k]f_k[x_k] \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\\dots,x_k] \\tag{by \\eqref{ddi}} \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + \\sum_{j=1}^{k} x_j. \\tag{IH}\n\\end{align}</span></p>\n\n<p>By the <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)\" rel=\"nofollow noreferrer\">mean value theorem for divided differences</a>, <span class=\"math-container\">$\\exists \\xi \\in \\left[ \\min_j\\{x_j\\},\\max_j\\{x_j\\} \\right]$</span> such that \n<span class=\"math-container\">$$\nf_k[x_0,\\dots,x_k] = \\frac{f_k^{(k)}(\\xi)}{k!}.\n$$</span>\nBut <span class=\"math-container\">$f_k^{(k)}(x) = k!$</span>, so <span class=\"math-container\">$f_k[x_0,\\dots,x_k]=1$</span>. Thus, \n<span class=\"math-container\">$$\nf_{k+1}[x_0,\\dots,x_k] = \\sum_{j=0}^k x_j\n$$</span>\nand the result follows by induction.</p>\n" }, { "answer_id": 3130723, "score": 0, "text": "<p>All of this can be achieved by weak induction.</p>\n\n<p>I'll use <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class=\"math-container\">$f[x_0,x_1,...,x_k]$</span> where <span class=\"math-container\">$f(r)=r^n$</span>.</p>\n\n<p>First, we'll show that <span class=\"math-container\">$f[s,t]_n=\\sum_{a+b=n-1}(s^at^b)$</span> for all <span class=\"math-container\">$n$</span> (where <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> are nonnegative) by simple induction.<br>\n(a proof can be found <a href=\"https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1\">here</a> so you can skip the next paragraph if you want) </p>\n\n<p>Clearly <span class=\"math-container\">$f[s,t]_1=\\sum_{a+b=0}(s^at^b)$</span> as <span class=\"math-container\">$$f[s,t]_1=\\frac{f[t]_1-f[s]_1}{t-s}=\\frac{t-s}{t-s}=1=s^0t^0=\\sum_{a+b=0}(s^at^b)$$</span> </p>\n\n<p>Now, suppose for some <span class=\"math-container\">$p$</span>, <span class=\"math-container\">$f[s,t]_p=\\sum_{a+b=p-1}(s^at^b)$</span>.<br>\nThen, we can relate <span class=\"math-container\">$f[s,t]_{p+1}$</span> and <span class=\"math-container\">$f[s,t]_p$</span> as follows: <span class=\"math-container\">$$f[s,t]_{p+1}=\\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\\frac{t^{p+1}-s^{p+1}}{t-s}=\\frac{t^pt-s^ps}{t-s}=\\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\\\=t^p+s\\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p>\n\n<p>So (by the induction hypothesis): <span class=\"math-container\">$$f[s,t]_{p+1}=t^p+s\\bigg(\\sum_{a+b=p-1}(s^at^b)\\bigg)=t^p+s\\bigg(\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^at^b)\\bigg)=t^p+\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^{a+1}t^b)$$</span> </p>\n\n<p>Using a change of variable from <span class=\"math-container\">$a+1$</span> to <span class=\"math-container\">$c$</span>: <span class=\"math-container\">$$t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq 
p-1}(s^ct^b)=s^0t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq p-1}(s^ct^b)=\\sum_{c+b=p,\\\\0\\leq c,b\\leq p}(s^ct^b)=\\sum_{c+b=(p+1)-1}(s^ct^b)\\\\=\\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p>\n\n<p>That proves the general formula for <span class=\"math-container\">$f[s,t]_n$</span>.</p>\n\n<p>Now suppose you have shown that <span class=\"math-container\">$f[x_0,x_1,...,x_{k-1}]=\\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class=\"math-container\">$n$</span> for some <span class=\"math-container\">$k$</span>.<br>\n(again, <span class=\"math-container\">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class=\"math-container\">$k=2$</span> was just proved above)</p>\n\n<p>Then: <span class=\"math-container\">$$f[x_0,x_1,...,x_k]=\\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p>\n\n<p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class=\"math-container\">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class=\"math-container\">$x_k$</span> and <span class=\"math-container\">$x_0$</span> will necessarily be the same (because the sum of all powers <span class=\"math-container\">$=n-k+1$</span>) </p>\n\n<p><span class=\"math-container\">$$\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\\bigg))\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\;f[x_0,x_k]_{i_k})\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\\bigg))$$</span> </p>\n\n<p>Noting that <span class=\"math-container\">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p>\n\n<p><span class=\"math-container\">$$\\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p>\n\n<p>So that does the general formula for <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class=\"math-container\">$n$</span> and <span class=\"math-container\">$k$</span>). </p>\n\n<p>Now, your question is about what happens when <span class=\"math-container\">$k=n-1$</span>. Well, in that case: <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n=\\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p>\n\n<p>Since all of <span class=\"math-container\">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class=\"math-container\">$i_k$</span>'s to be 1 and the rest to be zero. 
</p>\n\n<p>So we get <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n\\\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\\;.\\;.\\;.\\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\\\=x_0+x_1+...+x_{n-1}=\\sum_{i=0}^{n-1}x_i$$</span></p>\n\n<p>(please comment or edit for any corrections)</p>\n" }, { "answer_id": 3130594, "score": 2, "text": "<p>Here's a sketch of one solution.First see that <span class=\"math-container\">$f[x_0,\\cdots,x_{n-1}]$</span> is a polynomial of degree <span class=\"math-container\">$1$</span>. Use the fact that they are <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">symmetric</a> to see that we must have</p>\n\n<p><span class=\"math-container\">$$f[x_0,\\cdots,x_{n-1}]=k\\sum_{i=0}^{n-1} x_i$$</span></p>\n\n<p>for some <span class=\"math-container\">$k$</span>. Now use their mean value theorem to conclude.</p>\n" }, { "answer_id": 3130666, "score": 2, "text": "<p>We consider Newton's form of the interpolating polynomial. Let <span class=\"math-container\">$f(x) = x^n$</span> and let <span class=\"math-container\">$\\{x_j\\}_{j=0}^n$</span> denote <span class=\"math-container\">$n+1$</span> distinct nodes. Then \n<span class=\"math-container\">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \\dots + f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) + f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i).$$</span>\nWe will exploit the fact that the coefficient of <span class=\"math-container\">$x^{n-1}$</span> is <span class=\"math-container\">$0$</span>. We begin by studying the structure of the different terms. The last term is <span class=\"math-container\">\\begin{align}\n&amp; f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i) \\\\\n&amp; = f[x_0,\\dotsc,x_n] x^n - f[x_0,\\dotsc,x_n] (x_0 + x_1 + \\dots + x_{n-1}) x^{n-1} + O(x^{n-2}),\n\\end{align}</span>\nwhere <span class=\"math-container\">$O(x^{n-2})$</span> means \"terms with <span class=\"math-container\">$x$</span>-degree <span class=\"math-container\">$\\leq n-2$</span>\".\nIt is possible to conclude that <span class=\"math-container\">$$f[x_0,\\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class=\"math-container\">$n-1$</span> and the coefficient of <span class=\"math-container\">$x^n$</span> is <span class=\"math-container\">$1$</span> (alternatively, see <a href=\"https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585\">this post</a>). As for the second to last term we have \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) = f[x_0,\\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span>\nSince all other terms have degree at most <span class=\"math-container\">$n-2$</span> we conclude that \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] = x_0 + x_1 + \\dots + x_{n-1}.$$</span>\nIt is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p>\n" }, { "answer_id": 3130716, "score": 0, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">Liebniz rule</a> states that \n<span class=\"math-container\">$$\n(fg)[x_0,\\dots,x_n] = f[x_0]g[x_0,\\dots,x_n] + f[x_0,x_1]g[x_1,\\dots,x_n] + \\dots + f[x_0,\\dots,x_n]g[x_n].\n$$</span></p>\n\n<p>Proceed by induction on <span class=\"math-container\">$n$</span>. 
For convenience, let <span class=\"math-container\">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p>\n\n<p>Assume that the result holds for some positive integer <span class=\"math-container\">$k$</span>. It can be shown that \n<span class=\"math-container\">\\begin{equation} \nf_1[x_0,\\dots,x_j] = \n\\begin{cases}\n1, &amp; j=1; \\\\\n0, &amp; j&gt;1.\n\\end{cases} \\tag{1} \\label{ddi}\n\\end{equation}</span></p>\n\n<p>Thus, \n<span class=\"math-container\">\\begin{align}\nf_{k+1}[x_0,\\dots,x_k] \n&amp;= \\left(f_1 f_k\\right)[x_0,\\dots,x_k] \\\\\n&amp;= f_1[x_0]f_k[x_0,\\dots,x_k] + \nf_1[x_0, x_1]f_k[x_1,\\dots,x_k] + \\cdots + f_1[x_0,\\dots,x_k]f_k[x_k] \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\\dots,x_k] \\tag{by \\eqref{ddi}} \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + \\sum_{j=1}^{k} x_j. \\tag{IH}\n\\end{align}</span></p>\n\n<p>By the <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)\" rel=\"nofollow noreferrer\">mean value theorem for divided differences</a>, <span class=\"math-container\">$\\exists \\xi \\in \\left[ \\min_j\\{x_j\\},\\max_j\\{x_j\\} \\right]$</span> such that \n<span class=\"math-container\">$$\nf_k[x_0,\\dots,x_k] = \\frac{f_k^{(k)}(\\xi)}{k!}.\n$$</span>\nBut <span class=\"math-container\">$f_k^{(k)}(x) = k!$</span>, so <span class=\"math-container\">$f_k[x_0,\\dots,x_k]=1$</span>. Thus, \n<span class=\"math-container\">$$\nf_{k+1}[x_0,\\dots,x_k] = \\sum_{j=0}^k x_j\n$$</span>\nand the result follows by induction.</p>\n" }, { "answer_id": 3130723, "score": 0, "text": "<p>All of this can be achieved by weak induction.</p>\n\n<p>I'll use <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class=\"math-container\">$f[x_0,x_1,...,x_k]$</span> where <span class=\"math-container\">$f(r)=r^n$</span>.</p>\n\n<p>First, we'll show that <span class=\"math-container\">$f[s,t]_n=\\sum_{a+b=n-1}(s^at^b)$</span> for all <span class=\"math-container\">$n$</span> (where <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> are nonnegative) by simple induction.<br>\n(a proof can be found <a href=\"https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1\">here</a> so you can skip the next paragraph if you want) </p>\n\n<p>Clearly <span class=\"math-container\">$f[s,t]_1=\\sum_{a+b=0}(s^at^b)$</span> as <span class=\"math-container\">$$f[s,t]_1=\\frac{f[t]_1-f[s]_1}{t-s}=\\frac{t-s}{t-s}=1=s^0t^0=\\sum_{a+b=0}(s^at^b)$$</span> </p>\n\n<p>Now, suppose for some <span class=\"math-container\">$p$</span>, <span class=\"math-container\">$f[s,t]_p=\\sum_{a+b=p-1}(s^at^b)$</span>.<br>\nThen, we can relate <span class=\"math-container\">$f[s,t]_{p+1}$</span> and <span class=\"math-container\">$f[s,t]_p$</span> as follows: <span class=\"math-container\">$$f[s,t]_{p+1}=\\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\\frac{t^{p+1}-s^{p+1}}{t-s}=\\frac{t^pt-s^ps}{t-s}=\\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\\\=t^p+s\\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p>\n\n<p>So (by the induction hypothesis): <span class=\"math-container\">$$f[s,t]_{p+1}=t^p+s\\bigg(\\sum_{a+b=p-1}(s^at^b)\\bigg)=t^p+s\\bigg(\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^at^b)\\bigg)=t^p+\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^{a+1}t^b)$$</span> </p>\n\n<p>Using a change of variable from <span class=\"math-container\">$a+1$</span> to <span class=\"math-container\">$c$</span>: <span class=\"math-container\">$$t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq 
p-1}(s^ct^b)=s^0t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq p-1}(s^ct^b)=\\sum_{c+b=p,\\\\0\\leq c,b\\leq p}(s^ct^b)=\\sum_{c+b=(p+1)-1}(s^ct^b)\\\\=\\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p>\n\n<p>That proves the general formula for <span class=\"math-container\">$f[s,t]_n$</span>.</p>\n\n<p>Now suppose you have shown that <span class=\"math-container\">$f[x_0,x_1,...,x_{k-1}]=\\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class=\"math-container\">$n$</span> for some <span class=\"math-container\">$k$</span>.<br>\n(again, <span class=\"math-container\">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class=\"math-container\">$k=2$</span> was just proved above)</p>\n\n<p>Then: <span class=\"math-container\">$$f[x_0,x_1,...,x_k]=\\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p>\n\n<p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class=\"math-container\">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class=\"math-container\">$x_k$</span> and <span class=\"math-container\">$x_0$</span> will necessarily be the same (because the sum of all powers <span class=\"math-container\">$=n-k+1$</span>) </p>\n\n<p><span class=\"math-container\">$$\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\\bigg))\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\;f[x_0,x_k]_{i_k})\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\\bigg))$$</span> </p>\n\n<p>Noting that <span class=\"math-container\">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p>\n\n<p><span class=\"math-container\">$$\\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p>\n\n<p>So that does the general formula for <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class=\"math-container\">$n$</span> and <span class=\"math-container\">$k$</span>). </p>\n\n<p>Now, your question is about what happens when <span class=\"math-container\">$k=n-1$</span>. Well, in that case: <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n=\\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p>\n\n<p>Since all of <span class=\"math-container\">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class=\"math-container\">$i_k$</span>'s to be 1 and the rest to be zero. 
</p>\n\n<p>So we get <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n\\\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\\;.\\;.\\;.\\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\\\=x_0+x_1+...+x_{n-1}=\\sum_{i=0}^{n-1}x_i$$</span></p>\n\n<p>(please comment or edit for any corrections)</p>\n" }, { "answer_id": 3130594, "score": 2, "text": "<p>Here's a sketch of one solution.First see that <span class=\"math-container\">$f[x_0,\\cdots,x_{n-1}]$</span> is a polynomial of degree <span class=\"math-container\">$1$</span>. Use the fact that they are <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">symmetric</a> to see that we must have</p>\n\n<p><span class=\"math-container\">$$f[x_0,\\cdots,x_{n-1}]=k\\sum_{i=0}^{n-1} x_i$$</span></p>\n\n<p>for some <span class=\"math-container\">$k$</span>. Now use their mean value theorem to conclude.</p>\n" }, { "answer_id": 3130666, "score": 2, "text": "<p>We consider Newton's form of the interpolating polynomial. Let <span class=\"math-container\">$f(x) = x^n$</span> and let <span class=\"math-container\">$\\{x_j\\}_{j=0}^n$</span> denote <span class=\"math-container\">$n+1$</span> distinct nodes. Then \n<span class=\"math-container\">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \\dots + f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) + f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i).$$</span>\nWe will exploit the fact that the coefficient of <span class=\"math-container\">$x^{n-1}$</span> is <span class=\"math-container\">$0$</span>. We begin by studying the structure of the different terms. The last term is <span class=\"math-container\">\\begin{align}\n&amp; f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i) \\\\\n&amp; = f[x_0,\\dotsc,x_n] x^n - f[x_0,\\dotsc,x_n] (x_0 + x_1 + \\dots + x_{n-1}) x^{n-1} + O(x^{n-2}),\n\\end{align}</span>\nwhere <span class=\"math-container\">$O(x^{n-2})$</span> means \"terms with <span class=\"math-container\">$x$</span>-degree <span class=\"math-container\">$\\leq n-2$</span>\".\nIt is possible to conclude that <span class=\"math-container\">$$f[x_0,\\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class=\"math-container\">$n-1$</span> and the coefficient of <span class=\"math-container\">$x^n$</span> is <span class=\"math-container\">$1$</span> (alternatively, see <a href=\"https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585\">this post</a>). As for the second to last term we have \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) = f[x_0,\\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span>\nSince all other terms have degree at most <span class=\"math-container\">$n-2$</span> we conclude that \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] = x_0 + x_1 + \\dots + x_{n-1}.$$</span>\nIt is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p>\n" }, { "answer_id": 3130716, "score": 0, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">Liebniz rule</a> states that \n<span class=\"math-container\">$$\n(fg)[x_0,\\dots,x_n] = f[x_0]g[x_0,\\dots,x_n] + f[x_0,x_1]g[x_1,\\dots,x_n] + \\dots + f[x_0,\\dots,x_n]g[x_n].\n$$</span></p>\n\n<p>Proceed by induction on <span class=\"math-container\">$n$</span>. 
For convenience, let <span class=\"math-container\">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p>\n\n<p>Assume that the result holds for some positive integer <span class=\"math-container\">$k$</span>. It can be shown that \n<span class=\"math-container\">\\begin{equation} \nf_1[x_0,\\dots,x_j] = \n\\begin{cases}\n1, &amp; j=1; \\\\\n0, &amp; j&gt;1.\n\\end{cases} \\tag{1} \\label{ddi}\n\\end{equation}</span></p>\n\n<p>Thus, \n<span class=\"math-container\">\\begin{align}\nf_{k+1}[x_0,\\dots,x_k] \n&amp;= \\left(f_1 f_k\\right)[x_0,\\dots,x_k] \\\\\n&amp;= f_1[x_0]f_k[x_0,\\dots,x_k] + \nf_1[x_0, x_1]f_k[x_1,\\dots,x_k] + \\cdots + f_1[x_0,\\dots,x_k]f_k[x_k] \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\\dots,x_k] \\tag{by \\eqref{ddi}} \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + \\sum_{j=1}^{k} x_j. \\tag{IH}\n\\end{align}</span></p>\n\n<p>By the <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)\" rel=\"nofollow noreferrer\">mean value theorem for divided differences</a>, <span class=\"math-container\">$\\exists \\xi \\in \\left[ \\min_j\\{x_j\\},\\max_j\\{x_j\\} \\right]$</span> such that \n<span class=\"math-container\">$$\nf_k[x_0,\\dots,x_k] = \\frac{f_k^{(k)}(\\xi)}{k!}.\n$$</span>\nBut <span class=\"math-container\">$f_k^{(k)}(x) = k!$</span>, so <span class=\"math-container\">$f_k[x_0,\\dots,x_k]=1$</span>. Thus, \n<span class=\"math-container\">$$\nf_{k+1}[x_0,\\dots,x_k] = \\sum_{j=0}^k x_j\n$$</span>\nand the result follows by induction.</p>\n" }, { "answer_id": 3130723, "score": 0, "text": "<p>All of this can be achieved by weak induction.</p>\n\n<p>I'll use <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class=\"math-container\">$f[x_0,x_1,...,x_k]$</span> where <span class=\"math-container\">$f(r)=r^n$</span>.</p>\n\n<p>First, we'll show that <span class=\"math-container\">$f[s,t]_n=\\sum_{a+b=n-1}(s^at^b)$</span> for all <span class=\"math-container\">$n$</span> (where <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> are nonnegative) by simple induction.<br>\n(a proof can be found <a href=\"https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1\">here</a> so you can skip the next paragraph if you want) </p>\n\n<p>Clearly <span class=\"math-container\">$f[s,t]_1=\\sum_{a+b=0}(s^at^b)$</span> as <span class=\"math-container\">$$f[s,t]_1=\\frac{f[t]_1-f[s]_1}{t-s}=\\frac{t-s}{t-s}=1=s^0t^0=\\sum_{a+b=0}(s^at^b)$$</span> </p>\n\n<p>Now, suppose for some <span class=\"math-container\">$p$</span>, <span class=\"math-container\">$f[s,t]_p=\\sum_{a+b=p-1}(s^at^b)$</span>.<br>\nThen, we can relate <span class=\"math-container\">$f[s,t]_{p+1}$</span> and <span class=\"math-container\">$f[s,t]_p$</span> as follows: <span class=\"math-container\">$$f[s,t]_{p+1}=\\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\\frac{t^{p+1}-s^{p+1}}{t-s}=\\frac{t^pt-s^ps}{t-s}=\\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\\\=t^p+s\\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p>\n\n<p>So (by the induction hypothesis): <span class=\"math-container\">$$f[s,t]_{p+1}=t^p+s\\bigg(\\sum_{a+b=p-1}(s^at^b)\\bigg)=t^p+s\\bigg(\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^at^b)\\bigg)=t^p+\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^{a+1}t^b)$$</span> </p>\n\n<p>Using a change of variable from <span class=\"math-container\">$a+1$</span> to <span class=\"math-container\">$c$</span>: <span class=\"math-container\">$$t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq 
p-1}(s^ct^b)=s^0t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq p-1}(s^ct^b)=\\sum_{c+b=p,\\\\0\\leq c,b\\leq p}(s^ct^b)=\\sum_{c+b=(p+1)-1}(s^ct^b)\\\\=\\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p>\n\n<p>That proves the general formula for <span class=\"math-container\">$f[s,t]_n$</span>.</p>\n\n<p>Now suppose you have shown that <span class=\"math-container\">$f[x_0,x_1,...,x_{k-1}]=\\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class=\"math-container\">$n$</span> for some <span class=\"math-container\">$k$</span>.<br>\n(again, <span class=\"math-container\">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class=\"math-container\">$k=2$</span> was just proved above)</p>\n\n<p>Then: <span class=\"math-container\">$$f[x_0,x_1,...,x_k]=\\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p>\n\n<p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class=\"math-container\">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class=\"math-container\">$x_k$</span> and <span class=\"math-container\">$x_0$</span> will necessarily be the same (because the sum of all powers <span class=\"math-container\">$=n-k+1$</span>) </p>\n\n<p><span class=\"math-container\">$$\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\\bigg))\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\;f[x_0,x_k]_{i_k})\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\\bigg))$$</span> </p>\n\n<p>Noting that <span class=\"math-container\">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p>\n\n<p><span class=\"math-container\">$$\\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p>\n\n<p>So that does the general formula for <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class=\"math-container\">$n$</span> and <span class=\"math-container\">$k$</span>). </p>\n\n<p>Now, your question is about what happens when <span class=\"math-container\">$k=n-1$</span>. Well, in that case: <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n=\\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p>\n\n<p>Since all of <span class=\"math-container\">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class=\"math-container\">$i_k$</span>'s to be 1 and the rest to be zero. 
</p>\n\n<p>So we get <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n\\\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\\;.\\;.\\;.\\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\\\=x_0+x_1+...+x_{n-1}=\\sum_{i=0}^{n-1}x_i$$</span></p>\n\n<p>(please comment or edit for any corrections)</p>\n" }, { "answer_id": 3130594, "score": 2, "text": "<p>Here's a sketch of one solution.First see that <span class=\"math-container\">$f[x_0,\\cdots,x_{n-1}]$</span> is a polynomial of degree <span class=\"math-container\">$1$</span>. Use the fact that they are <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">symmetric</a> to see that we must have</p>\n\n<p><span class=\"math-container\">$$f[x_0,\\cdots,x_{n-1}]=k\\sum_{i=0}^{n-1} x_i$$</span></p>\n\n<p>for some <span class=\"math-container\">$k$</span>. Now use their mean value theorem to conclude.</p>\n" }, { "answer_id": 3130666, "score": 2, "text": "<p>We consider Newton's form of the interpolating polynomial. Let <span class=\"math-container\">$f(x) = x^n$</span> and let <span class=\"math-container\">$\\{x_j\\}_{j=0}^n$</span> denote <span class=\"math-container\">$n+1$</span> distinct nodes. Then \n<span class=\"math-container\">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \\dots + f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) + f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i).$$</span>\nWe will exploit the fact that the coefficient of <span class=\"math-container\">$x^{n-1}$</span> is <span class=\"math-container\">$0$</span>. We begin by studying the structure of the different terms. The last term is <span class=\"math-container\">\\begin{align}\n&amp; f[x_0,\\dotsc,x_n] \\prod_{i=0}^{n-1}(x-x_i) \\\\\n&amp; = f[x_0,\\dotsc,x_n] x^n - f[x_0,\\dotsc,x_n] (x_0 + x_1 + \\dots + x_{n-1}) x^{n-1} + O(x^{n-2}),\n\\end{align}</span>\nwhere <span class=\"math-container\">$O(x^{n-2})$</span> means \"terms with <span class=\"math-container\">$x$</span>-degree <span class=\"math-container\">$\\leq n-2$</span>\".\nIt is possible to conclude that <span class=\"math-container\">$$f[x_0,\\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class=\"math-container\">$n-1$</span> and the coefficient of <span class=\"math-container\">$x^n$</span> is <span class=\"math-container\">$1$</span> (alternatively, see <a href=\"https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585\">this post</a>). As for the second to last term we have \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] \\prod_{i=0}^{n-2}(x-x_i) = f[x_0,\\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span>\nSince all other terms have degree at most <span class=\"math-container\">$n-2$</span> we conclude that \n<span class=\"math-container\">$$ f[x_0,\\dotsc,x_{n-1}] = x_0 + x_1 + \\dots + x_{n-1}.$$</span>\nIt is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p>\n" }, { "answer_id": 3130716, "score": 0, "text": "<p>The <a href=\"https://en.wikipedia.org/wiki/Divided_differences#Properties\" rel=\"nofollow noreferrer\">Liebniz rule</a> states that \n<span class=\"math-container\">$$\n(fg)[x_0,\\dots,x_n] = f[x_0]g[x_0,\\dots,x_n] + f[x_0,x_1]g[x_1,\\dots,x_n] + \\dots + f[x_0,\\dots,x_n]g[x_n].\n$$</span></p>\n\n<p>Proceed by induction on <span class=\"math-container\">$n$</span>. 
For convenience, let <span class=\"math-container\">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p>\n\n<p>Assume that the result holds for some positive integer <span class=\"math-container\">$k$</span>. It can be shown that \n<span class=\"math-container\">\\begin{equation} \nf_1[x_0,\\dots,x_j] = \n\\begin{cases}\n1, &amp; j=1; \\\\\n0, &amp; j&gt;1.\n\\end{cases} \\tag{1} \\label{ddi}\n\\end{equation}</span></p>\n\n<p>Thus, \n<span class=\"math-container\">\\begin{align}\nf_{k+1}[x_0,\\dots,x_k] \n&amp;= \\left(f_1 f_k\\right)[x_0,\\dots,x_k] \\\\\n&amp;= f_1[x_0]f_k[x_0,\\dots,x_k] + \nf_1[x_0, x_1]f_k[x_1,\\dots,x_k] + \\cdots + f_1[x_0,\\dots,x_k]f_k[x_k] \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\\dots,x_k] \\tag{by \\eqref{ddi}} \\\\\n&amp;= x_0 f_k[x_0,\\dots,x_k] + \\sum_{j=1}^{k} x_j. \\tag{IH}\n\\end{align}</span></p>\n\n<p>By the <a href=\"https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)\" rel=\"nofollow noreferrer\">mean value theorem for divided differences</a>, <span class=\"math-container\">$\\exists \\xi \\in \\left[ \\min_j\\{x_j\\},\\max_j\\{x_j\\} \\right]$</span> such that \n<span class=\"math-container\">$$\nf_k[x_0,\\dots,x_k] = \\frac{f_k^{(k)}(\\xi)}{k!}.\n$$</span>\nBut <span class=\"math-container\">$f_k^{(k)}(x) = k!$</span>, so <span class=\"math-container\">$f_k[x_0,\\dots,x_k]=1$</span>. Thus, \n<span class=\"math-container\">$$\nf_{k+1}[x_0,\\dots,x_k] = \\sum_{j=0}^k x_j\n$$</span>\nand the result follows by induction.</p>\n" }, { "answer_id": 3130723, "score": 0, "text": "<p>All of this can be achieved by weak induction.</p>\n\n<p>I'll use <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class=\"math-container\">$f[x_0,x_1,...,x_k]$</span> where <span class=\"math-container\">$f(r)=r^n$</span>.</p>\n\n<p>First, we'll show that <span class=\"math-container\">$f[s,t]_n=\\sum_{a+b=n-1}(s^at^b)$</span> for all <span class=\"math-container\">$n$</span> (where <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> are nonnegative) by simple induction.<br>\n(a proof can be found <a href=\"https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1\">here</a> so you can skip the next paragraph if you want) </p>\n\n<p>Clearly <span class=\"math-container\">$f[s,t]_1=\\sum_{a+b=0}(s^at^b)$</span> as <span class=\"math-container\">$$f[s,t]_1=\\frac{f[t]_1-f[s]_1}{t-s}=\\frac{t-s}{t-s}=1=s^0t^0=\\sum_{a+b=0}(s^at^b)$$</span> </p>\n\n<p>Now, suppose for some <span class=\"math-container\">$p$</span>, <span class=\"math-container\">$f[s,t]_p=\\sum_{a+b=p-1}(s^at^b)$</span>.<br>\nThen, we can relate <span class=\"math-container\">$f[s,t]_{p+1}$</span> and <span class=\"math-container\">$f[s,t]_p$</span> as follows: <span class=\"math-container\">$$f[s,t]_{p+1}=\\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\\frac{t^{p+1}-s^{p+1}}{t-s}=\\frac{t^pt-s^ps}{t-s}=\\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\\\=t^p+s\\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p>\n\n<p>So (by the induction hypothesis): <span class=\"math-container\">$$f[s,t]_{p+1}=t^p+s\\bigg(\\sum_{a+b=p-1}(s^at^b)\\bigg)=t^p+s\\bigg(\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^at^b)\\bigg)=t^p+\\sum_{a+b=p-1,\\\\0\\leq a,b \\leq p-1}(s^{a+1}t^b)$$</span> </p>\n\n<p>Using a change of variable from <span class=\"math-container\">$a+1$</span> to <span class=\"math-container\">$c$</span>: <span class=\"math-container\">$$t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq 
p-1}(s^ct^b)=s^0t^p+\\sum_{c+b=p,\\\\1\\leq c\\leq p\\\\0\\leq b \\leq p-1}(s^ct^b)=\\sum_{c+b=p,\\\\0\\leq c,b\\leq p}(s^ct^b)=\\sum_{c+b=(p+1)-1}(s^ct^b)\\\\=\\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p>\n\n<p>That proves the general formula for <span class=\"math-container\">$f[s,t]_n$</span>.</p>\n\n<p>Now suppose you have shown that <span class=\"math-container\">$f[x_0,x_1,...,x_{k-1}]=\\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class=\"math-container\">$n$</span> for some <span class=\"math-container\">$k$</span>.<br>\n(again, <span class=\"math-container\">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class=\"math-container\">$k=2$</span> was just proved above)</p>\n\n<p>Then: <span class=\"math-container\">$$f[x_0,x_1,...,x_k]=\\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p>\n\n<p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class=\"math-container\">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class=\"math-container\">$x_k$</span> and <span class=\"math-container\">$x_0$</span> will necessarily be the same (because the sum of all powers <span class=\"math-container\">$=n-k+1$</span>) </p>\n\n<p><span class=\"math-container\">$$\\\\=\\frac{\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\\bigg))\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\;f[x_0,x_k]_{i_k})\\\\=\\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\\bigg(\\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\\bigg))$$</span> </p>\n\n<p>Noting that <span class=\"math-container\">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p>\n\n<p><span class=\"math-container\">$$\\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p>\n\n<p>So that does the general formula for <span class=\"math-container\">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class=\"math-container\">$n$</span> and <span class=\"math-container\">$k$</span>). </p>\n\n<p>Now, your question is about what happens when <span class=\"math-container\">$k=n-1$</span>. Well, in that case: <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n=\\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p>\n\n<p>Since all of <span class=\"math-container\">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class=\"math-container\">$i_k$</span>'s to be 1 and the rest to be zero. 
</p>\n\n<p>So we get <span class=\"math-container\">$$f[x_0,x_1,...,x_{n-1}]_n\\\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_2^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\\cdots+x_0^0x_1^0x_2^0...x_{n-1}^1\\\\=x_0+x_1+...+x_{n-1}=\\sum_{i=0}^{n-1}x_i$$</span></p>\n\n<p>(please comment or edit for any corrections)</p>\n" } ]
Question: Prove $f[x_0,...,x_{n-1}] = \sum_{i=0}^{n-1}x_i$ if $f(x) = x^n$ for $n\in\mathbb{N}$ Answer 1: <p>Here's a sketch of one solution. First see that <span class="math-container">$f[x_0,\cdots,x_{n-1}]$</span> is a polynomial of degree <span class="math-container">$1$</span> in the nodes. Use the fact that divided differences are <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">symmetric</a> to see that we must have</p> <p><span class="math-container">$$f[x_0,\cdots,x_{n-1}]=k\sum_{i=0}^{n-1} x_i$$</span></p> <p>for some <span class="math-container">$k$</span>. Now use the mean value theorem for divided differences to conclude.</p> Answer 2: <p>We consider Newton's form of the interpolating polynomial. Let <span class="math-container">$f(x) = x^n$</span> and let <span class="math-container">$\{x_j\}_{j=0}^n$</span> denote <span class="math-container">$n+1$</span> distinct nodes. Then <span class="math-container">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \dots + f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) + f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i).$$</span> We will exploit the fact that the coefficient of <span class="math-container">$x^{n-1}$</span> is <span class="math-container">$0$</span>. We begin by studying the structure of the different terms. The last term is <span class="math-container">\begin{align} &amp; f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i) \\ &amp; = f[x_0,\dotsc,x_n] x^n - f[x_0,\dotsc,x_n] (x_0 + x_1 + \dots + x_{n-1}) x^{n-1} + O(x^{n-2}), \end{align}</span> where <span class="math-container">$O(x^{n-2})$</span> means "terms with <span class="math-container">$x$</span>-degree <span class="math-container">$\leq n-2$</span>". It is possible to conclude that <span class="math-container">$$f[x_0,\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class="math-container">$n-1$</span> and the coefficient of <span class="math-container">$x^n$</span> is <span class="math-container">$1$</span> (alternatively, see <a href="https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585">this post</a>). As for the second-to-last term we have <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) = f[x_0,\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span> Since all other terms have degree at most <span class="math-container">$n-2$</span> we conclude that <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] = x_0 + x_1 + \dots + x_{n-1}.$$</span> It is worth stressing our use of the fact that two polynomials are identical if and only if their coefficients are identical. This completes the analysis.</p> Answer 3: <p>The <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">Leibniz rule</a> states that <span class="math-container">$$ (fg)[x_0,\dots,x_n] = f[x_0]g[x_0,\dots,x_n] + f[x_0,x_1]g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]g[x_n]. $$</span></p> <p>Proceed by induction on <span class="math-container">$n$</span>. For convenience, let <span class="math-container">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p> <p>Assume that the result holds for some positive integer <span class="math-container">$k$</span>. Since <span class="math-container">$f_1$</span> is a polynomial of degree one, its first divided differences equal <span class="math-container">$1$</span> and all higher-order ones vanish: <span class="math-container">\begin{equation} f_1[x_0,\dots,x_j] = \begin{cases} 1, &amp; j=1; \\ 0, &amp; j&gt;1. 
\end{cases} \tag{1} \label{ddi} \end{equation}</span></p> <p>Thus, <span class="math-container">\begin{align} f_{k+1}[x_0,\dots,x_k] &amp;= \left(f_1 f_k\right)[x_0,\dots,x_k] \\ &amp;= f_1[x_0]f_k[x_0,\dots,x_k] + f_1[x_0, x_1]f_k[x_1,\dots,x_k] + \cdots + f_1[x_0,\dots,x_k]f_k[x_k] \\ &amp;= x_0 f_k[x_0,\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\dots,x_k] \tag{by \eqref{ddi}} \\ &amp;= x_0 f_k[x_0,\dots,x_k] + \sum_{j=1}^{k} x_j. \tag{IH} \end{align}</span></p> <p>By the <a href="https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)" rel="nofollow noreferrer">mean value theorem for divided differences</a>, <span class="math-container">$\exists \xi \in \left[ \min_j\{x_j\},\max_j\{x_j\} \right]$</span> such that <span class="math-container">$$ f_k[x_0,\dots,x_k] = \frac{f_k^{(k)}(\xi)}{k!}. $$</span> But <span class="math-container">$f_k^{(k)}(x) = k!$</span>, so <span class="math-container">$f_k[x_0,\dots,x_k]=1$</span>. Thus, <span class="math-container">$$ f_{k+1}[x_0,\dots,x_k] = \sum_{j=0}^k x_j $$</span> and the result follows by induction.</p> Answer 4: <p>All of this can be achieved by weak induction.</p> <p>I'll use <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class="math-container">$f[x_0,x_1,...,x_k]$</span> where <span class="math-container">$f(r)=r^n$</span>.</p> <p>First, we'll show that <span class="math-container">$f[s,t]_n=\sum_{a+b=n-1}(s^at^b)$</span> for all <span class="math-container">$n$</span> (where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are nonnegative) by simple induction.<br> (a proof can be found <a href="https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1">here</a> so you can skip the next paragraph if you want) </p> <p>Clearly <span class="math-container">$f[s,t]_1=\sum_{a+b=0}(s^at^b)$</span> as <span class="math-container">$$f[s,t]_1=\frac{f[t]_1-f[s]_1}{t-s}=\frac{t-s}{t-s}=1=s^0t^0=\sum_{a+b=0}(s^at^b)$$</span> </p> <p>Now, suppose for some <span class="math-container">$p$</span>, <span class="math-container">$f[s,t]_p=\sum_{a+b=p-1}(s^at^b)$</span>.<br> Then, we can relate <span class="math-container">$f[s,t]_{p+1}$</span> and <span class="math-container">$f[s,t]_p$</span> as follows: <span class="math-container">$$f[s,t]_{p+1}=\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\frac{t^{p+1}-s^{p+1}}{t-s}=\frac{t^pt-s^ps}{t-s}=\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\=t^p+s\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p> <p>So (by the induction hypothesis): <span class="math-container">$$f[s,t]_{p+1}=t^p+s\bigg(\sum_{a+b=p-1}(s^at^b)\bigg)=t^p+s\bigg(\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^at^b)\bigg)=t^p+\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^{a+1}t^b)$$</span> </p> <p>Using a change of variable from <span class="math-container">$a+1$</span> to <span class="math-container">$c$</span>: <span class="math-container">$$t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=s^0t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=\sum_{c+b=p,\\0\leq c,b\leq p}(s^ct^b)=\sum_{c+b=(p+1)-1}(s^ct^b)\\=\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p> <p>That proves the general formula for <span class="math-container">$f[s,t]_n$</span>.</p> <p>Now suppose you have shown that <span class="math-container">$f[x_0,x_1,...,x_{k-1}]_n=\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class="math-container">$n$</span> for some <span class="math-container">$k$</span>.<br>
(again, <span class="math-container">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative) (the case for <span class="math-container">$k=2$</span> was just proved above)</p> <p>Then: <span class="math-container">$$f[x_0,x_1,...,x_k]_n=\frac{f[x_1,x_2,...,x_k]_n-f[x_0,x_1,...,x_{k-1}]_n}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p> <p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage: all that matters is that whenever the powers appearing on the terms <span class="math-container">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class="math-container">$x_k$</span> and <span class="math-container">$x_0$</span> will necessarily be the same (because the sum of all powers <span class="math-container">$=n-k+1$</span>) </p> <p><span class="math-container">$$\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\bigg))\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\;f[x_0,x_k]_{i_k})\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\bigg))$$</span> </p> <p>Noting that <span class="math-container">$i_1+i_2+...+i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p> <p><span class="math-container">$$\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p> <p>That establishes the general formula for <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> (for all <span class="math-container">$n$</span> and <span class="math-container">$k$</span>). </p> <p>Now, your question is about what happens when <span class="math-container">$k=n-1$</span>. Well, in that case: <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n=\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p> <p>Since all of <span class="math-container">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibility is for exactly one of them to be 1 and the rest to be zero. </p> <p>So we get <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_2^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\cdots+x_0^0x_1^0x_2^0...x_{n-1}^1\\=x_0+x_1+...+x_{n-1}=\sum_{i=0}^{n-1}x_i$$</span></p> <p>(please comment or edit for any corrections)</p>
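<p>The identity above is also easy to check numerically. The following is a minimal Python sketch (it is not taken from any of the answers; the helper name <code>dd</code> and the sample nodes are assumptions for illustration). It evaluates divided differences by the defining recurrence, using exact rational arithmetic, and confirms that <span class="math-container">$f[x_0,\dots,x_{n-1}]=\sum_{i=0}^{n-1}x_i$</span> for <span class="math-container">$f(x)=x^n$</span>; for instance, with <span class="math-container">$n=2$</span> it reproduces <span class="math-container">$f[x_0,x_1]=\frac{x_1^2-x_0^2}{x_1-x_0}=x_0+x_1$</span>.</p>
<pre><code># Numerical sanity check of f[x_0,...,x_{n-1}] = x_0 + ... + x_{n-1} for f(x) = x^n.
# 'dd' is a hypothetical helper implementing the standard divided-difference
# recurrence; it recomputes subproblems, so it is written for clarity, not speed.
from fractions import Fraction

def dd(f, xs):
    # Base case: f[x_0] = f(x_0).
    if len(xs) == 1:
        return f(xs[0])
    # Recurrence: f[x_0,...,x_m] = (f[x_1,...,x_m] - f[x_0,...,x_{m-1}]) / (x_m - x_0).
    return (dd(f, xs[1:]) - dd(f, xs[:-1])) / (xs[-1] - xs[0])

n = 5
nodes = [Fraction(v) for v in (1, 3, 4, 7, 9)]  # n distinct nodes x_0, ..., x_{n-1}
f = lambda x: x**n
assert dd(f, nodes) == sum(nodes)  # 1 + 3 + 4 + 7 + 9 = 24
print(dd(f, nodes))  # -&gt; 24
</code></pre>
<p>Using <code>Fraction</code> rather than floats avoids the cancellation error that the repeated differences would otherwise amplify, so the assertion tests the identity exactly.</p>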
Now use their mean value theorem to conclude.</p> Answer 6: <p>We consider Newton's form of the interpolating polynomial. Let <span class="math-container">$f(x) = x^n$</span> and let <span class="math-container">$\{x_j\}_{j=0}^n$</span> denote <span class="math-container">$n+1$</span> distinct nodes. Then <span class="math-container">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \dots + f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) + f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i).$$</span> We will exploit the fact that the coefficient of <span class="math-container">$x^{n-1}$</span> is <span class="math-container">$0$</span>. We begin by studying the structure of the different terms. The last term is <span class="math-container">\begin{align} &amp; f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i) \\ &amp; = f[x_0,\dotsc,x_n] x^n - f[x_0,\dotsc,x_n] (x_0 + x_1 + \dots + x_{n-1}) x^{n-1} + O(x^{n-2}), \end{align}</span> where <span class="math-container">$O(x^{n-2})$</span> means "terms with <span class="math-container">$x$</span>-degree <span class="math-container">$\leq n-2$</span>". It is possible to conclude that <span class="math-container">$$f[x_0,\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class="math-container">$n-1$</span> and the coefficient of <span class="math-container">$x^n$</span> is <span class="math-container">$1$</span> (alternatively, see <a href="https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585">this post</a>). As for the second to last term we have <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) = f[x_0,\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span> Since all other terms have degree at most <span class="math-container">$n-2$</span> we conclude that <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] = x_0 + x_1 + \dots + x_{n-1}.$$</span> It is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p> Answer 7: <p>The <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">Liebniz rule</a> states that <span class="math-container">$$ (fg)[x_0,\dots,x_n] = f[x_0]g[x_0,\dots,x_n] + f[x_0,x_1]g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]g[x_n]. $$</span></p> <p>Proceed by induction on <span class="math-container">$n$</span>. For convenience, let <span class="math-container">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p> <p>Assume that the result holds for some positive integer <span class="math-container">$k$</span>. It can be shown that <span class="math-container">\begin{equation} f_1[x_0,\dots,x_j] = \begin{cases} 1, &amp; j=1; \\ 0, &amp; j&gt;1. \end{cases} \tag{1} \label{ddi} \end{equation}</span></p> <p>Thus, <span class="math-container">\begin{align} f_{k+1}[x_0,\dots,x_k] &amp;= \left(f_1 f_k\right)[x_0,\dots,x_k] \\ &amp;= f_1[x_0]f_k[x_0,\dots,x_k] + f_1[x_0, x_1]f_k[x_1,\dots,x_k] + \cdots + f_1[x_0,\dots,x_k]f_k[x_k] \\ &amp;= x_0 f_k[x_0,\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\dots,x_k] \tag{by \eqref{ddi}} \\ &amp;= x_0 f_k[x_0,\dots,x_k] + \sum_{j=1}^{k} x_j. 
\tag{IH} \end{align}</span></p> <p>By the <a href="https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)" rel="nofollow noreferrer">mean value theorem for divided differences</a>, <span class="math-container">$\exists \xi \in \left[ \min_j\{x_j\},\max_j\{x_j\} \right]$</span> such that <span class="math-container">$$ f_k[x_0,\dots,x_k] = \frac{f_k^{(k)}(\xi)}{k!}. $$</span> But <span class="math-container">$f_k^{(k)}(x) = k!$</span>, so <span class="math-container">$f_k[x_0,\dots,x_k]=1$</span>. Thus, <span class="math-container">$$ f_{k+1}[x_0,\dots,x_k] = \sum_{j=0}^k x_j $$</span> and the result follows by induction.</p> Answer 8: <p>All of this can be achieved by weak induction.</p> <p>I'll use <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class="math-container">$f[x_0,x_1,...,x_k]$</span> where <span class="math-container">$f(r)=r^n$</span>.</p> <p>First, we'll show that <span class="math-container">$f[s,t]_n=\sum_{a+b=n-1}(s^at^b)$</span> for all <span class="math-container">$n$</span> (where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are nonnegative) by simple induction.<br> (a proof can be found <a href="https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1">here</a> so you can skip the next paragraph if you want) </p> <p>Clearly <span class="math-container">$f[s,t]_1=\sum_{a+b=0}(s^at^b)$</span> as <span class="math-container">$$f[s,t]_1=\frac{f[t]_1-f[s]_1}{t-s}=\frac{t-s}{t-s}=1=s^0t^0=\sum_{a+b=0}(s^at^b)$$</span> </p> <p>Now, suppose for some <span class="math-container">$p$</span>, <span class="math-container">$f[s,t]_p=\sum_{a+b=p-1}(s^at^b)$</span>.<br> Then, we can relate <span class="math-container">$f[s,t]_{p+1}$</span> and <span class="math-container">$f[s,t]_p$</span> as follows: <span class="math-container">$$f[s,t]_{p+1}=\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\frac{t^{p+1}-s^{p+1}}{t-s}=\frac{t^pt-s^ps}{t-s}=\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\=t^p+s\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p> <p>So (by the induction hypothesis): <span class="math-container">$$f[s,t]_{p+1}=t^p+s\bigg(\sum_{a+b=p-1}(s^at^b)\bigg)=t^p+s\bigg(\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^at^b)\bigg)=t^p+\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^{a+1}t^b)$$</span> </p> <p>Using a change of variable from <span class="math-container">$a+1$</span> to <span class="math-container">$c$</span>: <span class="math-container">$$t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=s^0t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=\sum_{c+b=p,\\0\leq c,b\leq p}(s^ct^b)=\sum_{c+b=(p+1)-1}(s^ct^b)\\=\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p> <p>That proves the general formula for <span class="math-container">$f[s,t]_n$</span>.</p> <p>Now suppose you have shown that <span class="math-container">$f[x_0,x_1,...,x_{k-1}]=\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class="math-container">$n$</span> for some <span class="math-container">$k$</span>.<br> (again, <span class="math-container">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class="math-container">$k=2$</span> was just proved above)</p> <p>Then: <span 
class="math-container">$$f[x_0,x_1,...,x_k]=\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p> <p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class="math-container">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class="math-container">$x_k$</span> and <span class="math-container">$x_0$</span> will necessarily be the same (because the sum of all powers <span class="math-container">$=n-k+1$</span>) </p> <p><span class="math-container">$$\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\bigg))\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\;f[x_0,x_k]_{i_k})\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\bigg))$$</span> </p> <p>Noting that <span class="math-container">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p> <p><span class="math-container">$$\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p> <p>So that does the general formula for <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class="math-container">$n$</span> and <span class="math-container">$k$</span>). </p> <p>Now, your question is about what happens when <span class="math-container">$k=n-1$</span>. Well, in that case: <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n=\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p> <p>Since all of <span class="math-container">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class="math-container">$i_k$</span>'s to be 1 and the rest to be zero. </p> <p>So we get <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\;.\;.\;.\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\=x_0+x_1+...+x_{n-1}=\sum_{i=0}^{n-1}x_i$$</span></p> <p>(please comment or edit for any corrections)</p> Answer 9: <p>Here's a sketch of one solution.First see that <span class="math-container">$f[x_0,\cdots,x_{n-1}]$</span> is a polynomial of degree <span class="math-container">$1$</span>. Use the fact that they are <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">symmetric</a> to see that we must have</p> <p><span class="math-container">$$f[x_0,\cdots,x_{n-1}]=k\sum_{i=0}^{n-1} x_i$$</span></p> <p>for some <span class="math-container">$k$</span>. Now use their mean value theorem to conclude.</p> Answer 10: <p>We consider Newton's form of the interpolating polynomial. 
Let <span class="math-container">$f(x) = x^n$</span> and let <span class="math-container">$\{x_j\}_{j=0}^n$</span> denote <span class="math-container">$n+1$</span> distinct nodes. Then <span class="math-container">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \dots + f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) + f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i).$$</span> We will exploit the fact that the coefficient of <span class="math-container">$x^{n-1}$</span> is <span class="math-container">$0$</span>. We begin by studying the structure of the different terms. The last term is <span class="math-container">\begin{align} &amp; f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i) \\ &amp; = f[x_0,\dotsc,x_n] x^n - f[x_0,\dotsc,x_n] (x_0 + x_1 + \dots + x_{n-1}) x^{n-1} + O(x^{n-2}), \end{align}</span> where <span class="math-container">$O(x^{n-2})$</span> means "terms with <span class="math-container">$x$</span>-degree <span class="math-container">$\leq n-2$</span>". It is possible to conclude that <span class="math-container">$$f[x_0,\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class="math-container">$n-1$</span> and the coefficient of <span class="math-container">$x^n$</span> is <span class="math-container">$1$</span> (alternatively, see <a href="https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585">this post</a>). As for the second to last term we have <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) = f[x_0,\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span> Since all other terms have degree at most <span class="math-container">$n-2$</span> we conclude that <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] = x_0 + x_1 + \dots + x_{n-1}.$$</span> It is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p> Answer 11: <p>The <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">Liebniz rule</a> states that <span class="math-container">$$ (fg)[x_0,\dots,x_n] = f[x_0]g[x_0,\dots,x_n] + f[x_0,x_1]g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]g[x_n]. $$</span></p> <p>Proceed by induction on <span class="math-container">$n$</span>. For convenience, let <span class="math-container">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p> <p>Assume that the result holds for some positive integer <span class="math-container">$k$</span>. It can be shown that <span class="math-container">\begin{equation} f_1[x_0,\dots,x_j] = \begin{cases} 1, &amp; j=1; \\ 0, &amp; j&gt;1. \end{cases} \tag{1} \label{ddi} \end{equation}</span></p> <p>Thus, <span class="math-container">\begin{align} f_{k+1}[x_0,\dots,x_k] &amp;= \left(f_1 f_k\right)[x_0,\dots,x_k] \\ &amp;= f_1[x_0]f_k[x_0,\dots,x_k] + f_1[x_0, x_1]f_k[x_1,\dots,x_k] + \cdots + f_1[x_0,\dots,x_k]f_k[x_k] \\ &amp;= x_0 f_k[x_0,\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\dots,x_k] \tag{by \eqref{ddi}} \\ &amp;= x_0 f_k[x_0,\dots,x_k] + \sum_{j=1}^{k} x_j. \tag{IH} \end{align}</span></p> <p>By the <a href="https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)" rel="nofollow noreferrer">mean value theorem for divided differences</a>, <span class="math-container">$\exists \xi \in \left[ \min_j\{x_j\},\max_j\{x_j\} \right]$</span> such that <span class="math-container">$$ f_k[x_0,\dots,x_k] = \frac{f_k^{(k)}(\xi)}{k!}. 
$$</span> But <span class="math-container">$f_k^{(k)}(x) = k!$</span>, so <span class="math-container">$f_k[x_0,\dots,x_k]=1$</span>. Thus, <span class="math-container">$$ f_{k+1}[x_0,\dots,x_k] = \sum_{j=0}^k x_j $$</span> and the result follows by induction.</p> Answer 12: <p>All of this can be achieved by weak induction.</p> <p>I'll use <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class="math-container">$f[x_0,x_1,...,x_k]$</span> where <span class="math-container">$f(r)=r^n$</span>.</p> <p>First, we'll show that <span class="math-container">$f[s,t]_n=\sum_{a+b=n-1}(s^at^b)$</span> for all <span class="math-container">$n$</span> (where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are nonnegative) by simple induction.<br> (a proof can be found <a href="https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1">here</a> so you can skip the next paragraph if you want) </p> <p>Clearly <span class="math-container">$f[s,t]_1=\sum_{a+b=0}(s^at^b)$</span> as <span class="math-container">$$f[s,t]_1=\frac{f[t]_1-f[s]_1}{t-s}=\frac{t-s}{t-s}=1=s^0t^0=\sum_{a+b=0}(s^at^b)$$</span> </p> <p>Now, suppose for some <span class="math-container">$p$</span>, <span class="math-container">$f[s,t]_p=\sum_{a+b=p-1}(s^at^b)$</span>.<br> Then, we can relate <span class="math-container">$f[s,t]_{p+1}$</span> and <span class="math-container">$f[s,t]_p$</span> as follows: <span class="math-container">$$f[s,t]_{p+1}=\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\frac{t^{p+1}-s^{p+1}}{t-s}=\frac{t^pt-s^ps}{t-s}=\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\=t^p+s\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p> <p>So (by the induction hypothesis): <span class="math-container">$$f[s,t]_{p+1}=t^p+s\bigg(\sum_{a+b=p-1}(s^at^b)\bigg)=t^p+s\bigg(\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^at^b)\bigg)=t^p+\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^{a+1}t^b)$$</span> </p> <p>Using a change of variable from <span class="math-container">$a+1$</span> to <span class="math-container">$c$</span>: <span class="math-container">$$t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=s^0t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=\sum_{c+b=p,\\0\leq c,b\leq p}(s^ct^b)=\sum_{c+b=(p+1)-1}(s^ct^b)\\=\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p> <p>That proves the general formula for <span class="math-container">$f[s,t]_n$</span>.</p> <p>Now suppose you have shown that <span class="math-container">$f[x_0,x_1,...,x_{k-1}]=\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class="math-container">$n$</span> for some <span class="math-container">$k$</span>.<br> (again, <span class="math-container">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class="math-container">$k=2$</span> was just proved above)</p> <p>Then: <span class="math-container">$$f[x_0,x_1,...,x_k]=\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p> <p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class="math-container">$x_1,x_2,...,x_{k-1}$</span> are the same in 
both sums, the powers appearing on <span class="math-container">$x_k$</span> and <span class="math-container">$x_0$</span> will necessarily be the same (because the sum of all powers <span class="math-container">$=n-k+1$</span>) </p> <p><span class="math-container">$$\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\bigg))\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\;f[x_0,x_k]_{i_k})\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\bigg))$$</span> </p> <p>Noting that <span class="math-container">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p> <p><span class="math-container">$$\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p> <p>So that does the general formula for <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class="math-container">$n$</span> and <span class="math-container">$k$</span>). </p> <p>Now, your question is about what happens when <span class="math-container">$k=n-1$</span>. Well, in that case: <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n=\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p> <p>Since all of <span class="math-container">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class="math-container">$i_k$</span>'s to be 1 and the rest to be zero. </p> <p>So we get <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\;.\;.\;.\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\=x_0+x_1+...+x_{n-1}=\sum_{i=0}^{n-1}x_i$$</span></p> <p>(please comment or edit for any corrections)</p> Answer 13: <p>Here's a sketch of one solution.First see that <span class="math-container">$f[x_0,\cdots,x_{n-1}]$</span> is a polynomial of degree <span class="math-container">$1$</span>. Use the fact that they are <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">symmetric</a> to see that we must have</p> <p><span class="math-container">$$f[x_0,\cdots,x_{n-1}]=k\sum_{i=0}^{n-1} x_i$$</span></p> <p>for some <span class="math-container">$k$</span>. Now use their mean value theorem to conclude.</p> Answer 14: <p>We consider Newton's form of the interpolating polynomial. Let <span class="math-container">$f(x) = x^n$</span> and let <span class="math-container">$\{x_j\}_{j=0}^n$</span> denote <span class="math-container">$n+1$</span> distinct nodes. Then <span class="math-container">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \dots + f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) + f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i).$$</span> We will exploit the fact that the coefficient of <span class="math-container">$x^{n-1}$</span> is <span class="math-container">$0$</span>. We begin by studying the structure of the different terms. 
The last term is <span class="math-container">\begin{align} &amp; f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i) \\ &amp; = f[x_0,\dotsc,x_n] x^n - f[x_0,\dotsc,x_n] (x_0 + x_1 + \dots + x_{n-1}) x^{n-1} + O(x^{n-2}), \end{align}</span> where <span class="math-container">$O(x^{n-2})$</span> means "terms with <span class="math-container">$x$</span>-degree <span class="math-container">$\leq n-2$</span>". It is possible to conclude that <span class="math-container">$$f[x_0,\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class="math-container">$n-1$</span> and the coefficient of <span class="math-container">$x^n$</span> is <span class="math-container">$1$</span> (alternatively, see <a href="https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585">this post</a>). As for the second to last term we have <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) = f[x_0,\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span> Since all other terms have degree at most <span class="math-container">$n-2$</span> we conclude that <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] = x_0 + x_1 + \dots + x_{n-1}.$$</span> It is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p> Answer 15: <p>The <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">Liebniz rule</a> states that <span class="math-container">$$ (fg)[x_0,\dots,x_n] = f[x_0]g[x_0,\dots,x_n] + f[x_0,x_1]g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]g[x_n]. $$</span></p> <p>Proceed by induction on <span class="math-container">$n$</span>. For convenience, let <span class="math-container">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p> <p>Assume that the result holds for some positive integer <span class="math-container">$k$</span>. It can be shown that <span class="math-container">\begin{equation} f_1[x_0,\dots,x_j] = \begin{cases} 1, &amp; j=1; \\ 0, &amp; j&gt;1. \end{cases} \tag{1} \label{ddi} \end{equation}</span></p> <p>Thus, <span class="math-container">\begin{align} f_{k+1}[x_0,\dots,x_k] &amp;= \left(f_1 f_k\right)[x_0,\dots,x_k] \\ &amp;= f_1[x_0]f_k[x_0,\dots,x_k] + f_1[x_0, x_1]f_k[x_1,\dots,x_k] + \cdots + f_1[x_0,\dots,x_k]f_k[x_k] \\ &amp;= x_0 f_k[x_0,\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\dots,x_k] \tag{by \eqref{ddi}} \\ &amp;= x_0 f_k[x_0,\dots,x_k] + \sum_{j=1}^{k} x_j. \tag{IH} \end{align}</span></p> <p>By the <a href="https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)" rel="nofollow noreferrer">mean value theorem for divided differences</a>, <span class="math-container">$\exists \xi \in \left[ \min_j\{x_j\},\max_j\{x_j\} \right]$</span> such that <span class="math-container">$$ f_k[x_0,\dots,x_k] = \frac{f_k^{(k)}(\xi)}{k!}. $$</span> But <span class="math-container">$f_k^{(k)}(x) = k!$</span>, so <span class="math-container">$f_k[x_0,\dots,x_k]=1$</span>. 
Thus, <span class="math-container">$$ f_{k+1}[x_0,\dots,x_k] = \sum_{j=0}^k x_j $$</span> and the result follows by induction.</p> Answer 16: <p>All of this can be achieved by weak induction.</p> <p>I'll use <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class="math-container">$f[x_0,x_1,...,x_k]$</span> where <span class="math-container">$f(r)=r^n$</span>.</p> <p>First, we'll show that <span class="math-container">$f[s,t]_n=\sum_{a+b=n-1}(s^at^b)$</span> for all <span class="math-container">$n$</span> (where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are nonnegative) by simple induction.<br> (a proof can be found <a href="https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1">here</a> so you can skip the next paragraph if you want) </p> <p>Clearly <span class="math-container">$f[s,t]_1=\sum_{a+b=0}(s^at^b)$</span> as <span class="math-container">$$f[s,t]_1=\frac{f[t]_1-f[s]_1}{t-s}=\frac{t-s}{t-s}=1=s^0t^0=\sum_{a+b=0}(s^at^b)$$</span> </p> <p>Now, suppose for some <span class="math-container">$p$</span>, <span class="math-container">$f[s,t]_p=\sum_{a+b=p-1}(s^at^b)$</span>.<br> Then, we can relate <span class="math-container">$f[s,t]_{p+1}$</span> and <span class="math-container">$f[s,t]_p$</span> as follows: <span class="math-container">$$f[s,t]_{p+1}=\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\frac{t^{p+1}-s^{p+1}}{t-s}=\frac{t^pt-s^ps}{t-s}=\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\=t^p+s\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p> <p>So (by the induction hypothesis): <span class="math-container">$$f[s,t]_{p+1}=t^p+s\bigg(\sum_{a+b=p-1}(s^at^b)\bigg)=t^p+s\bigg(\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^at^b)\bigg)=t^p+\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^{a+1}t^b)$$</span> </p> <p>Using a change of variable from <span class="math-container">$a+1$</span> to <span class="math-container">$c$</span>: <span class="math-container">$$t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=s^0t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=\sum_{c+b=p,\\0\leq c,b\leq p}(s^ct^b)=\sum_{c+b=(p+1)-1}(s^ct^b)\\=\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p> <p>That proves the general formula for <span class="math-container">$f[s,t]_n$</span>.</p> <p>Now suppose you have shown that <span class="math-container">$f[x_0,x_1,...,x_{k-1}]=\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class="math-container">$n$</span> for some <span class="math-container">$k$</span>.<br> (again, <span class="math-container">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class="math-container">$k=2$</span> was just proved above)</p> <p>Then: <span class="math-container">$$f[x_0,x_1,...,x_k]=\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p> <p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class="math-container">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class="math-container">$x_k$</span> and <span class="math-container">$x_0$</span> will 
necessarily be the same (because the sum of all powers <span class="math-container">$=n-k+1$</span>) </p> <p><span class="math-container">$$\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\bigg))\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\;f[x_0,x_k]_{i_k})\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\bigg))$$</span> </p> <p>Noting that <span class="math-container">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p> <p><span class="math-container">$$\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p> <p>So that does the general formula for <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class="math-container">$n$</span> and <span class="math-container">$k$</span>). </p> <p>Now, your question is about what happens when <span class="math-container">$k=n-1$</span>. Well, in that case: <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n=\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p> <p>Since all of <span class="math-container">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class="math-container">$i_k$</span>'s to be 1 and the rest to be zero. </p> <p>So we get <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\;.\;.\;.\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\=x_0+x_1+...+x_{n-1}=\sum_{i=0}^{n-1}x_i$$</span></p> <p>(please comment or edit for any corrections)</p> Answer 17: <p>Here's a sketch of one solution.First see that <span class="math-container">$f[x_0,\cdots,x_{n-1}]$</span> is a polynomial of degree <span class="math-container">$1$</span>. Use the fact that they are <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">symmetric</a> to see that we must have</p> <p><span class="math-container">$$f[x_0,\cdots,x_{n-1}]=k\sum_{i=0}^{n-1} x_i$$</span></p> <p>for some <span class="math-container">$k$</span>. Now use their mean value theorem to conclude.</p> Answer 18: <p>We consider Newton's form of the interpolating polynomial. Let <span class="math-container">$f(x) = x^n$</span> and let <span class="math-container">$\{x_j\}_{j=0}^n$</span> denote <span class="math-container">$n+1$</span> distinct nodes. Then <span class="math-container">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \dots + f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) + f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i).$$</span> We will exploit the fact that the coefficient of <span class="math-container">$x^{n-1}$</span> is <span class="math-container">$0$</span>. We begin by studying the structure of the different terms. 
The last term is <span class="math-container">\begin{align} &amp; f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i) \\ &amp; = f[x_0,\dotsc,x_n] x^n - f[x_0,\dotsc,x_n] (x_0 + x_1 + \dots + x_{n-1}) x^{n-1} + O(x^{n-2}), \end{align}</span> where <span class="math-container">$O(x^{n-2})$</span> means "terms with <span class="math-container">$x$</span>-degree <span class="math-container">$\leq n-2$</span>". It is possible to conclude that <span class="math-container">$$f[x_0,\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class="math-container">$n-1$</span> and the coefficient of <span class="math-container">$x^n$</span> is <span class="math-container">$1$</span> (alternatively, see <a href="https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585">this post</a>). As for the second to last term we have <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) = f[x_0,\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span> Since all other terms have degree at most <span class="math-container">$n-2$</span> we conclude that <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] = x_0 + x_1 + \dots + x_{n-1}.$$</span> It is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p> Answer 19: <p>The <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">Liebniz rule</a> states that <span class="math-container">$$ (fg)[x_0,\dots,x_n] = f[x_0]g[x_0,\dots,x_n] + f[x_0,x_1]g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]g[x_n]. $$</span></p> <p>Proceed by induction on <span class="math-container">$n$</span>. For convenience, let <span class="math-container">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p> <p>Assume that the result holds for some positive integer <span class="math-container">$k$</span>. It can be shown that <span class="math-container">\begin{equation} f_1[x_0,\dots,x_j] = \begin{cases} 1, &amp; j=1; \\ 0, &amp; j&gt;1. \end{cases} \tag{1} \label{ddi} \end{equation}</span></p> <p>Thus, <span class="math-container">\begin{align} f_{k+1}[x_0,\dots,x_k] &amp;= \left(f_1 f_k\right)[x_0,\dots,x_k] \\ &amp;= f_1[x_0]f_k[x_0,\dots,x_k] + f_1[x_0, x_1]f_k[x_1,\dots,x_k] + \cdots + f_1[x_0,\dots,x_k]f_k[x_k] \\ &amp;= x_0 f_k[x_0,\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\dots,x_k] \tag{by \eqref{ddi}} \\ &amp;= x_0 f_k[x_0,\dots,x_k] + \sum_{j=1}^{k} x_j. \tag{IH} \end{align}</span></p> <p>By the <a href="https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)" rel="nofollow noreferrer">mean value theorem for divided differences</a>, <span class="math-container">$\exists \xi \in \left[ \min_j\{x_j\},\max_j\{x_j\} \right]$</span> such that <span class="math-container">$$ f_k[x_0,\dots,x_k] = \frac{f_k^{(k)}(\xi)}{k!}. $$</span> But <span class="math-container">$f_k^{(k)}(x) = k!$</span>, so <span class="math-container">$f_k[x_0,\dots,x_k]=1$</span>. 
Thus, <span class="math-container">$$ f_{k+1}[x_0,\dots,x_k] = \sum_{j=0}^k x_j $$</span> and the result follows by induction.</p> Answer 20: <p>All of this can be achieved by weak induction.</p> <p>I'll use <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class="math-container">$f[x_0,x_1,...,x_k]$</span> where <span class="math-container">$f(r)=r^n$</span>.</p> <p>First, we'll show that <span class="math-container">$f[s,t]_n=\sum_{a+b=n-1}(s^at^b)$</span> for all <span class="math-container">$n$</span> (where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are nonnegative) by simple induction.<br> (a proof can be found <a href="https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1">here</a> so you can skip the next paragraph if you want) </p> <p>Clearly <span class="math-container">$f[s,t]_1=\sum_{a+b=0}(s^at^b)$</span> as <span class="math-container">$$f[s,t]_1=\frac{f[t]_1-f[s]_1}{t-s}=\frac{t-s}{t-s}=1=s^0t^0=\sum_{a+b=0}(s^at^b)$$</span> </p> <p>Now, suppose for some <span class="math-container">$p$</span>, <span class="math-container">$f[s,t]_p=\sum_{a+b=p-1}(s^at^b)$</span>.<br> Then, we can relate <span class="math-container">$f[s,t]_{p+1}$</span> and <span class="math-container">$f[s,t]_p$</span> as follows: <span class="math-container">$$f[s,t]_{p+1}=\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\frac{t^{p+1}-s^{p+1}}{t-s}=\frac{t^pt-s^ps}{t-s}=\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\=t^p+s\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p> <p>So (by the induction hypothesis): <span class="math-container">$$f[s,t]_{p+1}=t^p+s\bigg(\sum_{a+b=p-1}(s^at^b)\bigg)=t^p+s\bigg(\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^at^b)\bigg)=t^p+\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^{a+1}t^b)$$</span> </p> <p>Using a change of variable from <span class="math-container">$a+1$</span> to <span class="math-container">$c$</span>: <span class="math-container">$$t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=s^0t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=\sum_{c+b=p,\\0\leq c,b\leq p}(s^ct^b)=\sum_{c+b=(p+1)-1}(s^ct^b)\\=\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p> <p>That proves the general formula for <span class="math-container">$f[s,t]_n$</span>.</p> <p>Now suppose you have shown that <span class="math-container">$f[x_0,x_1,...,x_{k-1}]=\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class="math-container">$n$</span> for some <span class="math-container">$k$</span>.<br> (again, <span class="math-container">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class="math-container">$k=2$</span> was just proved above)</p> <p>Then: <span class="math-container">$$f[x_0,x_1,...,x_k]=\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p> <p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class="math-container">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class="math-container">$x_k$</span> and <span class="math-container">$x_0$</span> will 
necessarily be the same (because the sum of all powers <span class="math-container">$=n-k+1$</span>) </p> <p><span class="math-container">$$\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\bigg))\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\;f[x_0,x_k]_{i_k})\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\bigg))$$</span> </p> <p>Noting that <span class="math-container">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p> <p><span class="math-container">$$\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p> <p>So that does the general formula for <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class="math-container">$n$</span> and <span class="math-container">$k$</span>). </p> <p>Now, your question is about what happens when <span class="math-container">$k=n-1$</span>. Well, in that case: <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n=\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p> <p>Since all of <span class="math-container">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class="math-container">$i_k$</span>'s to be 1 and the rest to be zero. </p> <p>So we get <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\;.\;.\;.\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\=x_0+x_1+...+x_{n-1}=\sum_{i=0}^{n-1}x_i$$</span></p> <p>(please comment or edit for any corrections)</p> Answer 21: <p>Here's a sketch of one solution.First see that <span class="math-container">$f[x_0,\cdots,x_{n-1}]$</span> is a polynomial of degree <span class="math-container">$1$</span>. Use the fact that they are <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">symmetric</a> to see that we must have</p> <p><span class="math-container">$$f[x_0,\cdots,x_{n-1}]=k\sum_{i=0}^{n-1} x_i$$</span></p> <p>for some <span class="math-container">$k$</span>. Now use their mean value theorem to conclude.</p> Answer 22: <p>We consider Newton's form of the interpolating polynomial. Let <span class="math-container">$f(x) = x^n$</span> and let <span class="math-container">$\{x_j\}_{j=0}^n$</span> denote <span class="math-container">$n+1$</span> distinct nodes. Then <span class="math-container">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \dots + f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) + f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i).$$</span> We will exploit the fact that the coefficient of <span class="math-container">$x^{n-1}$</span> is <span class="math-container">$0$</span>. We begin by studying the structure of the different terms. 
The last term is <span class="math-container">\begin{align} &amp; f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i) \\ &amp; = f[x_0,\dotsc,x_n] x^n - f[x_0,\dotsc,x_n] (x_0 + x_1 + \dots + x_{n-1}) x^{n-1} + O(x^{n-2}), \end{align}</span> where <span class="math-container">$O(x^{n-2})$</span> means "terms with <span class="math-container">$x$</span>-degree <span class="math-container">$\leq n-2$</span>". It is possible to conclude that <span class="math-container">$$f[x_0,\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class="math-container">$n-1$</span> and the coefficient of <span class="math-container">$x^n$</span> is <span class="math-container">$1$</span> (alternatively, see <a href="https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585">this post</a>). As for the second to last term we have <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) = f[x_0,\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span> Since all other terms have degree at most <span class="math-container">$n-2$</span> we conclude that <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] = x_0 + x_1 + \dots + x_{n-1}.$$</span> It is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p> Answer 23: <p>The <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">Liebniz rule</a> states that <span class="math-container">$$ (fg)[x_0,\dots,x_n] = f[x_0]g[x_0,\dots,x_n] + f[x_0,x_1]g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]g[x_n]. $$</span></p> <p>Proceed by induction on <span class="math-container">$n$</span>. For convenience, let <span class="math-container">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p> <p>Assume that the result holds for some positive integer <span class="math-container">$k$</span>. It can be shown that <span class="math-container">\begin{equation} f_1[x_0,\dots,x_j] = \begin{cases} 1, &amp; j=1; \\ 0, &amp; j&gt;1. \end{cases} \tag{1} \label{ddi} \end{equation}</span></p> <p>Thus, <span class="math-container">\begin{align} f_{k+1}[x_0,\dots,x_k] &amp;= \left(f_1 f_k\right)[x_0,\dots,x_k] \\ &amp;= f_1[x_0]f_k[x_0,\dots,x_k] + f_1[x_0, x_1]f_k[x_1,\dots,x_k] + \cdots + f_1[x_0,\dots,x_k]f_k[x_k] \\ &amp;= x_0 f_k[x_0,\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\dots,x_k] \tag{by \eqref{ddi}} \\ &amp;= x_0 f_k[x_0,\dots,x_k] + \sum_{j=1}^{k} x_j. \tag{IH} \end{align}</span></p> <p>By the <a href="https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)" rel="nofollow noreferrer">mean value theorem for divided differences</a>, <span class="math-container">$\exists \xi \in \left[ \min_j\{x_j\},\max_j\{x_j\} \right]$</span> such that <span class="math-container">$$ f_k[x_0,\dots,x_k] = \frac{f_k^{(k)}(\xi)}{k!}. $$</span> But <span class="math-container">$f_k^{(k)}(x) = k!$</span>, so <span class="math-container">$f_k[x_0,\dots,x_k]=1$</span>. 
Thus, <span class="math-container">$$ f_{k+1}[x_0,\dots,x_k] = \sum_{j=0}^k x_j $$</span> and the result follows by induction.</p> Answer 24: <p>All of this can be achieved by weak induction.</p> <p>I'll use <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class="math-container">$f[x_0,x_1,...,x_k]$</span> where <span class="math-container">$f(r)=r^n$</span>.</p> <p>First, we'll show that <span class="math-container">$f[s,t]_n=\sum_{a+b=n-1}(s^at^b)$</span> for all <span class="math-container">$n$</span> (where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are nonnegative) by simple induction.<br> (a proof can be found <a href="https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1">here</a> so you can skip the next paragraph if you want) </p> <p>Clearly <span class="math-container">$f[s,t]_1=\sum_{a+b=0}(s^at^b)$</span> as <span class="math-container">$$f[s,t]_1=\frac{f[t]_1-f[s]_1}{t-s}=\frac{t-s}{t-s}=1=s^0t^0=\sum_{a+b=0}(s^at^b)$$</span> </p> <p>Now, suppose for some <span class="math-container">$p$</span>, <span class="math-container">$f[s,t]_p=\sum_{a+b=p-1}(s^at^b)$</span>.<br> Then, we can relate <span class="math-container">$f[s,t]_{p+1}$</span> and <span class="math-container">$f[s,t]_p$</span> as follows: <span class="math-container">$$f[s,t]_{p+1}=\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\frac{t^{p+1}-s^{p+1}}{t-s}=\frac{t^pt-s^ps}{t-s}=\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\=t^p+s\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p> <p>So (by the induction hypothesis): <span class="math-container">$$f[s,t]_{p+1}=t^p+s\bigg(\sum_{a+b=p-1}(s^at^b)\bigg)=t^p+s\bigg(\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^at^b)\bigg)=t^p+\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^{a+1}t^b)$$</span> </p> <p>Using a change of variable from <span class="math-container">$a+1$</span> to <span class="math-container">$c$</span>: <span class="math-container">$$t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=s^0t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=\sum_{c+b=p,\\0\leq c,b\leq p}(s^ct^b)=\sum_{c+b=(p+1)-1}(s^ct^b)\\=\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p> <p>That proves the general formula for <span class="math-container">$f[s,t]_n$</span>.</p> <p>Now suppose you have shown that <span class="math-container">$f[x_0,x_1,...,x_{k-1}]=\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class="math-container">$n$</span> for some <span class="math-container">$k$</span>.<br> (again, <span class="math-container">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class="math-container">$k=2$</span> was just proved above)</p> <p>Then: <span class="math-container">$$f[x_0,x_1,...,x_k]=\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p> <p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class="math-container">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class="math-container">$x_k$</span> and <span class="math-container">$x_0$</span> will 
necessarily be the same (because the sum of all powers <span class="math-container">$=n-k+1$</span>) </p> <p><span class="math-container">$$\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\bigg))\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\;f[x_0,x_k]_{i_k})\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\bigg))$$</span> </p> <p>Noting that <span class="math-container">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p> <p><span class="math-container">$$\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p> <p>So that does the general formula for <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class="math-container">$n$</span> and <span class="math-container">$k$</span>). </p> <p>Now, your question is about what happens when <span class="math-container">$k=n-1$</span>. Well, in that case: <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n=\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p> <p>Since all of <span class="math-container">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class="math-container">$i_k$</span>'s to be 1 and the rest to be zero. </p> <p>So we get <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\;.\;.\;.\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\=x_0+x_1+...+x_{n-1}=\sum_{i=0}^{n-1}x_i$$</span></p> <p>(please comment or edit for any corrections)</p> Answer 25: <p>Here's a sketch of one solution.First see that <span class="math-container">$f[x_0,\cdots,x_{n-1}]$</span> is a polynomial of degree <span class="math-container">$1$</span>. Use the fact that they are <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">symmetric</a> to see that we must have</p> <p><span class="math-container">$$f[x_0,\cdots,x_{n-1}]=k\sum_{i=0}^{n-1} x_i$$</span></p> <p>for some <span class="math-container">$k$</span>. Now use their mean value theorem to conclude.</p> Answer 26: <p>We consider Newton's form of the interpolating polynomial. Let <span class="math-container">$f(x) = x^n$</span> and let <span class="math-container">$\{x_j\}_{j=0}^n$</span> denote <span class="math-container">$n+1$</span> distinct nodes. Then <span class="math-container">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \dots + f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) + f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i).$$</span> We will exploit the fact that the coefficient of <span class="math-container">$x^{n-1}$</span> is <span class="math-container">$0$</span>. We begin by studying the structure of the different terms. 
The last term is <span class="math-container">\begin{align} &amp; f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i) \\ &amp; = f[x_0,\dotsc,x_n] x^n - f[x_0,\dotsc,x_n] (x_0 + x_1 + \dots + x_{n-1}) x^{n-1} + O(x^{n-2}), \end{align}</span> where <span class="math-container">$O(x^{n-2})$</span> means "terms with <span class="math-container">$x$</span>-degree <span class="math-container">$\leq n-2$</span>". It is possible to conclude that <span class="math-container">$$f[x_0,\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class="math-container">$n-1$</span> and the coefficient of <span class="math-container">$x^n$</span> is <span class="math-container">$1$</span> (alternatively, see <a href="https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585">this post</a>). As for the second to last term we have <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) = f[x_0,\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span> Since all other terms have degree at most <span class="math-container">$n-2$</span> we conclude that <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] = x_0 + x_1 + \dots + x_{n-1}.$$</span> It is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p> Answer 27: <p>The <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">Liebniz rule</a> states that <span class="math-container">$$ (fg)[x_0,\dots,x_n] = f[x_0]g[x_0,\dots,x_n] + f[x_0,x_1]g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]g[x_n]. $$</span></p> <p>Proceed by induction on <span class="math-container">$n$</span>. For convenience, let <span class="math-container">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p> <p>Assume that the result holds for some positive integer <span class="math-container">$k$</span>. It can be shown that <span class="math-container">\begin{equation} f_1[x_0,\dots,x_j] = \begin{cases} 1, &amp; j=1; \\ 0, &amp; j&gt;1. \end{cases} \tag{1} \label{ddi} \end{equation}</span></p> <p>Thus, <span class="math-container">\begin{align} f_{k+1}[x_0,\dots,x_k] &amp;= \left(f_1 f_k\right)[x_0,\dots,x_k] \\ &amp;= f_1[x_0]f_k[x_0,\dots,x_k] + f_1[x_0, x_1]f_k[x_1,\dots,x_k] + \cdots + f_1[x_0,\dots,x_k]f_k[x_k] \\ &amp;= x_0 f_k[x_0,\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\dots,x_k] \tag{by \eqref{ddi}} \\ &amp;= x_0 f_k[x_0,\dots,x_k] + \sum_{j=1}^{k} x_j. \tag{IH} \end{align}</span></p> <p>By the <a href="https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)" rel="nofollow noreferrer">mean value theorem for divided differences</a>, <span class="math-container">$\exists \xi \in \left[ \min_j\{x_j\},\max_j\{x_j\} \right]$</span> such that <span class="math-container">$$ f_k[x_0,\dots,x_k] = \frac{f_k^{(k)}(\xi)}{k!}. $$</span> But <span class="math-container">$f_k^{(k)}(x) = k!$</span>, so <span class="math-container">$f_k[x_0,\dots,x_k]=1$</span>. 
Thus, <span class="math-container">$$ f_{k+1}[x_0,\dots,x_k] = \sum_{j=0}^k x_j $$</span> and the result follows by induction.</p> Answer 28: <p>All of this can be achieved by weak induction.</p> <p>I'll use <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class="math-container">$f[x_0,x_1,...,x_k]$</span> where <span class="math-container">$f(r)=r^n$</span>.</p> <p>First, we'll show that <span class="math-container">$f[s,t]_n=\sum_{a+b=n-1}(s^at^b)$</span> for all <span class="math-container">$n$</span> (where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are nonnegative) by simple induction.<br> (a proof can be found <a href="https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1">here</a> so you can skip the next paragraph if you want) </p> <p>Clearly <span class="math-container">$f[s,t]_1=\sum_{a+b=0}(s^at^b)$</span> as <span class="math-container">$$f[s,t]_1=\frac{f[t]_1-f[s]_1}{t-s}=\frac{t-s}{t-s}=1=s^0t^0=\sum_{a+b=0}(s^at^b)$$</span> </p> <p>Now, suppose for some <span class="math-container">$p$</span>, <span class="math-container">$f[s,t]_p=\sum_{a+b=p-1}(s^at^b)$</span>.<br> Then, we can relate <span class="math-container">$f[s,t]_{p+1}$</span> and <span class="math-container">$f[s,t]_p$</span> as follows: <span class="math-container">$$f[s,t]_{p+1}=\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\frac{t^{p+1}-s^{p+1}}{t-s}=\frac{t^pt-s^ps}{t-s}=\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\=t^p+s\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p> <p>So (by the induction hypothesis): <span class="math-container">$$f[s,t]_{p+1}=t^p+s\bigg(\sum_{a+b=p-1}(s^at^b)\bigg)=t^p+s\bigg(\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^at^b)\bigg)=t^p+\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^{a+1}t^b)$$</span> </p> <p>Using a change of variable from <span class="math-container">$a+1$</span> to <span class="math-container">$c$</span>: <span class="math-container">$$t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=s^0t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=\sum_{c+b=p,\\0\leq c,b\leq p}(s^ct^b)=\sum_{c+b=(p+1)-1}(s^ct^b)\\=\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p> <p>That proves the general formula for <span class="math-container">$f[s,t]_n$</span>.</p> <p>Now suppose you have shown that <span class="math-container">$f[x_0,x_1,...,x_{k-1}]=\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class="math-container">$n$</span> for some <span class="math-container">$k$</span>.<br> (again, <span class="math-container">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class="math-container">$k=2$</span> was just proved above)</p> <p>Then: <span class="math-container">$$f[x_0,x_1,...,x_k]=\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p> <p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class="math-container">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class="math-container">$x_k$</span> and <span class="math-container">$x_0$</span> will 
necessarily be the same (because the sum of all powers <span class="math-container">$=n-k+1$</span>) </p> <p><span class="math-container">$$\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\bigg))\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\;f[x_0,x_k]_{i_k})\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\bigg))$$</span> </p> <p>Noting that <span class="math-container">$i_1+i_2+...i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p> <p><span class="math-container">$$\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p> <p>So that does the general formula for <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> for us (for all <span class="math-container">$n$</span> and <span class="math-container">$k$</span>). </p> <p>Now, your question is about what happens when <span class="math-container">$k=n-1$</span>. Well, in that case: <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n=\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p> <p>Since all of <span class="math-container">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class="math-container">$i_k$</span>'s to be 1 and the rest to be zero. </p> <p>So we get <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_0^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\;.\;.\;.\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\=x_0+x_1+...+x_{n-1}=\sum_{i=0}^{n-1}x_i$$</span></p> <p>(please comment or edit for any corrections)</p> Answer 29: <p>Here's a sketch of one solution.First see that <span class="math-container">$f[x_0,\cdots,x_{n-1}]$</span> is a polynomial of degree <span class="math-container">$1$</span>. Use the fact that they are <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">symmetric</a> to see that we must have</p> <p><span class="math-container">$$f[x_0,\cdots,x_{n-1}]=k\sum_{i=0}^{n-1} x_i$$</span></p> <p>for some <span class="math-container">$k$</span>. Now use their mean value theorem to conclude.</p> Answer 30: <p>We consider Newton's form of the interpolating polynomial. Let <span class="math-container">$f(x) = x^n$</span> and let <span class="math-container">$\{x_j\}_{j=0}^n$</span> denote <span class="math-container">$n+1$</span> distinct nodes. Then <span class="math-container">$$x^n = f[x_0] + f[x_0,x_1](x-x_0) + \dots + f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) + f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i).$$</span> We will exploit the fact that the coefficient of <span class="math-container">$x^{n-1}$</span> is <span class="math-container">$0$</span>. We begin by studying the structure of the different terms. 
The last term is <span class="math-container">\begin{align} &amp; f[x_0,\dotsc,x_n] \prod_{i=0}^{n-1}(x-x_i) \\ &amp; = f[x_0,\dotsc,x_n] x^n - f[x_0,\dotsc,x_n] (x_0 + x_1 + \dots + x_{n-1}) x^{n-1} + O(x^{n-2}), \end{align}</span> where <span class="math-container">$O(x^{n-2})$</span> means "terms with <span class="math-container">$x$</span>-degree <span class="math-container">$\leq n-2$</span>". It is possible to conclude that <span class="math-container">$$f[x_0,\dotsc,x_n] = 1,$$</span> because all other terms have order at most <span class="math-container">$n-1$</span> and the coefficient of <span class="math-container">$x^n$</span> is <span class="math-container">$1$</span> (alternatively, see <a href="https://math.stackexchange.com/questions/95420/show-fx-0-x-1-ldots-x-n-1-for-divided-differences-of-xn/3130585#3130585">this post</a>). As for the second to last term we have <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] \prod_{i=0}^{n-2}(x-x_i) = f[x_0,\dotsc,x_{n-1}] x^{n-1} + O(x^{n-2}).$$</span> Since all other terms have degree at most <span class="math-container">$n-2$</span> we conclude that <span class="math-container">$$ f[x_0,\dotsc,x_{n-1}] = x_0 + x_1 + \dots + x_{n-1}.$$</span> It is worth stressing our use of fact that two polynomials are identical if and only their coefficients are identical. This completes the analysis.</p> Answer 31: <p>The <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">Liebniz rule</a> states that <span class="math-container">$$ (fg)[x_0,\dots,x_n] = f[x_0]g[x_0,\dots,x_n] + f[x_0,x_1]g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]g[x_n]. $$</span></p> <p>Proceed by induction on <span class="math-container">$n$</span>. For convenience, let <span class="math-container">$f_n(x) := x^n$</span>. Notice that the base case is trivial. </p> <p>Assume that the result holds for some positive integer <span class="math-container">$k$</span>. It can be shown that <span class="math-container">\begin{equation} f_1[x_0,\dots,x_j] = \begin{cases} 1, &amp; j=1; \\ 0, &amp; j&gt;1. \end{cases} \tag{1} \label{ddi} \end{equation}</span></p> <p>Thus, <span class="math-container">\begin{align} f_{k+1}[x_0,\dots,x_k] &amp;= \left(f_1 f_k\right)[x_0,\dots,x_k] \\ &amp;= f_1[x_0]f_k[x_0,\dots,x_k] + f_1[x_0, x_1]f_k[x_1,\dots,x_k] + \cdots + f_1[x_0,\dots,x_k]f_k[x_k] \\ &amp;= x_0 f_k[x_0,\dots,x_k] + f_1[x_0,x_1]f_k[x_1,\dots,x_k] \tag{by \eqref{ddi}} \\ &amp;= x_0 f_k[x_0,\dots,x_k] + \sum_{j=1}^{k} x_j. \tag{IH} \end{align}</span></p> <p>By the <a href="https://en.wikipedia.org/wiki/Mean_value_theorem_(divided_differences)" rel="nofollow noreferrer">mean value theorem for divided differences</a>, <span class="math-container">$\exists \xi \in \left[ \min_j\{x_j\},\max_j\{x_j\} \right]$</span> such that <span class="math-container">$$ f_k[x_0,\dots,x_k] = \frac{f_k^{(k)}(\xi)}{k!}. $$</span> But <span class="math-container">$f_k^{(k)}(x) = k!$</span>, so <span class="math-container">$f_k[x_0,\dots,x_k]=1$</span>. 
Thus, <span class="math-container">$$ f_{k+1}[x_0,\dots,x_k] = \sum_{j=0}^k x_j $$</span> and the result follows by induction.</p> Answer 32: <p>All of this can be achieved by weak induction.</p> <p>I'll use <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> to denote <span class="math-container">$f[x_0,x_1,...,x_k]$</span> where <span class="math-container">$f(r)=r^n$</span>.</p> <p>First, we'll show that <span class="math-container">$f[s,t]_n=\sum_{a+b=n-1}(s^at^b)$</span> for all <span class="math-container">$n$</span> (where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are nonnegative) by simple induction.<br> (a proof can be found <a href="https://math.stackexchange.com/questions/2031059/prove-that-an-bn-a-ban-1-an-2b-cdots-bn-1">here</a> so you can skip the next paragraph if you want) </p> <p>Clearly <span class="math-container">$f[s,t]_1=\sum_{a+b=0}(s^at^b)$</span> as <span class="math-container">$$f[s,t]_1=\frac{f[t]_1-f[s]_1}{t-s}=\frac{t-s}{t-s}=1=s^0t^0=\sum_{a+b=0}(s^at^b)$$</span> </p> <p>Now, suppose for some <span class="math-container">$p$</span>, <span class="math-container">$f[s,t]_p=\sum_{a+b=p-1}(s^at^b)$</span>.<br> Then, we can relate <span class="math-container">$f[s,t]_{p+1}$</span> and <span class="math-container">$f[s,t]_p$</span> as follows: <span class="math-container">$$f[s,t]_{p+1}=\frac{f[t]_{p+1}-f[s]_{p+1}}{t-s}=\frac{t^{p+1}-s^{p+1}}{t-s}=\frac{t^pt-s^ps}{t-s}=\frac{t^p(t-s)+s(t^p-s^p)}{t-s}\\=t^p+s\frac{t^p-s^p}{t-s}=t^p+sf[s,t]_p$$</span> </p> <p>So (by the induction hypothesis): <span class="math-container">$$f[s,t]_{p+1}=t^p+s\bigg(\sum_{a+b=p-1}(s^at^b)\bigg)=t^p+s\bigg(\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^at^b)\bigg)=t^p+\sum_{a+b=p-1,\\0\leq a,b \leq p-1}(s^{a+1}t^b)$$</span> </p> <p>Using a change of variable from <span class="math-container">$a+1$</span> to <span class="math-container">$c$</span>: <span class="math-container">$$t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=s^0t^p+\sum_{c+b=p,\\1\leq c\leq p\\0\leq b \leq p-1}(s^ct^b)=\sum_{c+b=p,\\0\leq c,b\leq p}(s^ct^b)=\sum_{c+b=(p+1)-1}(s^ct^b)\\=\sum_{a+b=(p+1)-1}(s^at^b)$$</span> </p> <p>That proves the general formula for <span class="math-container">$f[s,t]_n$</span>.</p> <p>Now suppose you have shown that <span class="math-container">$f[x_0,x_1,...,x_{k-1}]=\sum_{i_0+i_1+...+i_{k-1}=n-k+1}(x_0^{i_0}x_1^{i_1}...x_{k-1}^{i_{k-1}})$</span> for the given <span class="math-container">$n$</span> for some <span class="math-container">$k$</span>.<br> (again, <span class="math-container">$i_0,i_1,...,i_{k-1}$</span> are all nonnegative)(the case for <span class="math-container">$k=2$</span> was just proved above)</p> <p>Then: <span class="math-container">$$f[x_0,x_1,...,x_k]=\frac{f[x_1,x_2,...,x_k]-f[x_0,x_1,...,x_{k-1}]}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_0^{i_1}x_1^{i_2}...x_{k-1}^{i_k})}{x_k-x_0}\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_k^{i_k})-\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}x_0^{i_k})}{x_k-x_0}$$</span> </p> <p>In the above step we have simply relabeled the indices of the powers in the second term to our advantage- all that matters is that whenever the powers appearing on the terms <span class="math-container">$x_1,x_2,...,x_{k-1}$</span> are the same in both sums, the powers appearing on <span class="math-container">$x_k$</span> and <span class="math-container">$x_0$</span> will 
necessarily be the same (because the sum of all powers <span class="math-container">$=n-k+1$</span>) </p> <p><span class="math-container">$$\\=\frac{\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}(x_k^{i_k}-x_0^{i_k}))}{x_k-x_0}\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\frac{x_k^{i_k}-x_0^{i_k}}{x_k-x_0}\bigg))\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\;f[x_0,x_k]_{i_k})\\=\sum_{i_1+i_2+...+i_k=n-k+1}(x_1^{i_1}x_2^{i_2}...x_{k-1}^{i_{k-1}}\bigg(\sum_{j_1+j_2=i_k-1}x_0^{j_1}x_k^{j_2}\bigg))$$</span> </p> <p>Noting that <span class="math-container">$i_1+i_2+...+i_{k-1}+(j_1+j_2)=i_1+i_2+...+i_{k-1}+(i_k-1)=(n-k+1)-1=n-k\\=n-(k+1)+1$</span> and relabeling indices again, we get: </p> <p><span class="math-container">$$\sum_{i_0+i_1+...+i_{k-1}+i_k=n-(k+1)+1}(x_0^{i_0}x_1^{i_1}...x_k^{i_k})$$</span> </p> <p>That establishes the general formula for <span class="math-container">$f[x_0,x_1,...,x_k]_n$</span> (for all <span class="math-container">$n$</span> and <span class="math-container">$k$</span>). </p> <p>Now, your question is about what happens when <span class="math-container">$k=n-1$</span>. Well, in that case: <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n=\sum_{i_0+i_1+...+i_{n-1}=n-n+1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})=\sum_{i_0+i_1+...+i_{n-1}=1}(x_0^{i_0}x_1^{i_1}...x_{n-1}^{i_{n-1}})$$</span> </p> <p>Since all of <span class="math-container">$i_0,i_1,...,i_{n-1}$</span> are to be nonnegative integers, the only possibilities for their values are for one of the <span class="math-container">$i_j$</span>'s to be $1$ and the rest to be zero. </p> <p>So we get <span class="math-container">$$f[x_0,x_1,...,x_{n-1}]_n\\=x_0^1x_1^0x_2^0...x_{n-1}^0+x_0^0x_1^1x_2^0...x_{n-1}^0+x_0^0x_1^0x_2^1...x_{n-1}^0+\;.\;.\;.\; +x_0^0x_1^0x_2^0...x_{n-1}^1\\=x_0+x_1+...+x_{n-1}=\sum_{i=0}^{n-1}x_i$$</span></p> <p>(please comment or edit for any corrections)</p> Answer 33: <p>Here's a sketch of one solution. First see that <span class="math-container">$f[x_0,\cdots,x_{n-1}]$</span> is a polynomial of degree <span class="math-container">$1$</span> in the nodes. Use the fact that divided differences are <a href="https://en.wikipedia.org/wiki/Divided_differences#Properties" rel="nofollow noreferrer">symmetric</a> to see that we must have</p> <p><span class="math-container">$$f[x_0,\cdots,x_{n-1}]=k\sum_{i=0}^{n-1} x_i$$</span></p> <p>for some <span class="math-container">$k$</span>. Now use the mean value theorem for divided differences to conclude.</p>
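<p>A quick numerical cross-check of the two identities above (a sketch; the helper name, the choice of nodes, and the use of exact rationals are assumptions of mine, not taken from any answer): it builds the standard recursive divided-difference table for $f(x)=x^n$ and confirms that $f[x_0,\dotsc,x_{n-1}]$ equals the sum of the nodes while $f[x_0,\dotsc,x_n]$ equals $1$.</p>
<pre><code>from fractions import Fraction

def divided_difference(f, xs):
    # Standard recursive table: each pass turns the list of values
    # f[x_i, ..., x_{i+level-1}] into the next-order differences.
    table = [Fraction(f(x)) for x in xs]
    for level in range(1, len(xs)):
        table = [(table[i + 1] - table[i]) / (xs[i + level] - xs[i])
                 for i in range(len(table) - 1)]
    return table[0]

n = 5
nodes = [Fraction(v) for v in (0, 2, 3, 7, 11, 13)]  # n + 1 distinct nodes
f = lambda x: x ** n

# f[x_0, ..., x_{n-1}] over the first n nodes is the sum of those nodes,
# and f[x_0, ..., x_n] over all n + 1 nodes is the leading coefficient 1.
assert divided_difference(f, nodes[:n]) == sum(nodes[:n])
assert divided_difference(f, nodes) == 1
print("identities verified for n =", n)
</code></pre>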
Integer solutions to nonlinear system of equations $(x+1)^2+y^2 = (x+2)^2+z^2$ and $(x+2)^2+z^2 = (x+3)^2+w^2$
2,085,253
6
https://math.stackexchange.com/questions/2085253/integer-solutions-to-nonlinear-system-of-equations-x12y2-x22z2-a
<blockquote> <p>Do there exist integers $x,y,z,w$ that satisfy \begin{align*}(x+1)^2+y^2 &amp;= (x+2)^2+z^2\\(x+2)^2+z^2 &amp;= (x+3)^2+w^2?\end{align*}</p> </blockquote> <p>I was thinking about trying to show by contradiction that no such integers exist. The first equation gives $y^2 = 2x+3+z^2$ while the second gives $z^2 = 2x+5+w^2$. How can we find a contradiction from here?</p>
[ { "answer_id": 2085294, "score": 8, "text": "<p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$\n$$ 1105 = 5 \cdot 13 \cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$\n$$ 12025 = 5^2 \cdot 13 \cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$\n$$ 66625 = 5^3 \cdot 13 \cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$\n$$ 252601 = 41 \cdot 61 \cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$\n$$ 292825 = 5^2 \cdot 13 \cdot 17 \cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$\n$$ 751825 = 5^2 \cdot 17 \cdot 29 \cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p>\n\n<p>INSERT FRIDAY: in order to show that these examples are always $n \equiv 1 \pmod 8,$ it suffices to consider everything $\pmod{16}.$ Odd squares are $1,9 \pmod {16}.$ Indeed, if $t \equiv \pm 1 \pmod 8$ then $t^2 \equiv 1 \pmod{16}.$ If $s \equiv \pm 3 \pmod 8$ then $s^2 \equiv 9 \pmod{16}.$ The restriction comes in even squares, which are $0, 4 \pmod {16}.$ That is most of the argument, that $12$ is not a square $\pmod {16}.$ Call it a proposition: it is not possible to have\n$$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 = x_3^2 + y_3^2 $$ with\n$$ x_1 \equiv \pm 1 \pmod 8, \; \; \; x_2 \equiv 2 \pmod 4, \; \; \; x_3 \equiv \pm 3 \pmod 8. $$</p>\n\n<p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p>\n\n<p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$\n$$ \color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$\n$$ \color{red}{w = \left( 4 m^3 - 1 \right)^2 + \left(2 m^2 + 2m \right)^2} $$\n$$ \color{red}{w = \left( 4 m^3 \right)^2 + \left(2 m^2 + 1 \right)^2 }$$\n$$ \color{red}{ w = \left( 4 m^3 + 1 \right)^2 + \left(2 m^2 - 2m \right)^2 }$$\nYep, \n$$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$\n$$ w = \left( 4 n^3 + 12 n^2 + 12 n + 3 \right)^2 + \left( 2n^2 + 6n + 4 \right)^2 $$\n$$ w = \left( 4 n^3 + 12 n^2 + 12 n + 4 \right)^2 + \left( 2 n^2 + 4n +3 \right)^2 $$\n$$ w = \left( 4 n^3 + 12 n^2 + 12 n + 5 \right)^2 + \left( 2 n^2 + 2n \right)^2 $$</p>\n\n<p>Check algebra:</p>\n\n<pre><code>parisize = 4000000, primelimit = 500509\n? u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2\n%1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2\n%2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2\n%3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? u - v\n%4 = 0\n? v - w\n%5 = 0\n? w - u\n%6 = 0\n? \n</code></pre>\n" }, { "answer_id": 2085305, "score": 3, "text": "<p>$(-1)^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p>\n\n<p>So $x = -2$ etc.</p>\n" }, { "answer_id": 2085296, "score": 2, "text": "<p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares:\n$$y^2-z^2=a$$</p>\n\n<p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possibility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done.
The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p>\n\n<p>Now the hard step is that in addition to the first equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p>\n\n<p>In addition, the solution for $z$ in both cases has to be the same, of course.</p>\n\n<p>Then:</p>\n\n<ul>\n<li>$y=(m+n)/2$</li>\n<li>$z=(m-n)/2=(u+v)/2$</li>\n<li>$w=(u-v)/2$</li>\n<li>$mn-uv=-2$</li>\n</ul>\n\n<p>Since $u=m-n-v$ we have\n$$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p>\n\n<p>If we see this as a second degree equation in $v$, the discriminant (which must be a square) is\n$$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p>\n\n<p>Let $s=m-3n$. There exists some $t$ such that\n$$s^2-t^2=8(n^2+1)$$</p>\n\n<p>Note that $s$ and $t$ are even.</p>\n\n<p>We can say for example (there are more possibilities) that\n$$s+t=2n^2+2$$\n$$s-t=4$$\nThen we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also\n$$v=\frac{n^2+2n+3\pm(n^2-1)}2=\begin{cases}n+2\implies u=n^2+n+1\\n^2+n+1\implies u=n+2\end{cases}$$</p>\n\n<p>The correct solution is the one that makes $v\le u$, that is, $u=n^2+n+1$, $v=n+2$.</p>\n\n<p>Then, for <strong>any</strong> odd value of $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p>\n\n<p>Namely:</p>\n\n<p>$$\begin{cases}y=\frac{(n+1)(n+3)}2\\z=\frac{n^2+2n+3}2\\w=\frac{(n+1)(n-1)}2\\x=\frac{y^2-z^2-3}2\end{cases}$$</p>\n\n<p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. So $y=112$, $z=99$, $w=84$. And from this,\n$$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p>\n" }, { "answer_id": 2085639, "score": 0, "text": "<p>The differences between consecutive squares are consecutive odd numbers. So we have:</p>\n\n<p>$y^2-z^2=$ odd number</p>\n\n<p>$z^2-w^2=$ next odd number</p>\n\n<p>Of course if these were reversed (if the word \"next\" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work.
As it is, it's trickier, but certainly not impossible.</p>\n\n<p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition will provide a solution.</p>\n\n<p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p>\n\n<hr>\n\n<p>Borrowing from another existing answer, let's check this solution against my statement above:</p>\n\n<p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p>\n\n<p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p>\n\n<p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p>\n\n<p>The first five numbers of course add up to $13\times5=65$, and the latter three numbers add up to $21\times3=63$, which illustrates my earlier conclusion.</p>\n\n<hr>\n\n<p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p>\n" } ]
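<p>A brute-force cross-check of the small common values listed in the top answer (a sketch; the search bound of 200 and the variable names are arbitrary choices of mine): it enumerates pairs $z&gt;w\ge 0$, recovers $x$ from $z^2-w^2=2x+5$, and keeps the cases where $y^2=z^2+2x+3$ is a perfect square.</p>
<pre><code>from math import isqrt

def is_square(m):
    return isqrt(m) ** 2 == m

solutions = []
for z in range(1, 200):          # search bound chosen arbitrarily
    for w in range(z):
        d = z * z - w * w        # must equal 2x + 5: an odd number, at least 5
        if d % 2 == 0 or d &lt; 5:
            continue
        x = (d - 5) // 2
        y2 = z * z + 2 * x + 3   # forced by (x+1)^2 + y^2 = (x+2)^2 + z^2
        if is_square(y2):
            n = (x + 2) ** 2 + z * z
            solutions.append((n, x, isqrt(y2), z, w))

for n, x, y, z, w in sorted(solutions)[:5]:
    print(n, (x, y, z, w))       # the first hits include n = 25 and n = 1105
</code></pre>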
$$</p>\n\n<p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p>\n\n<p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$\n$$ \\color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$\n$$ \\color{red}{w = \\left( 4 m^3 - 1 \\right)^2 + \\left(2 m^2 + 2m \\right)^2} $$\n$$ \\color{red}{w = \\left( 4 m^3 \\right)^2 + \\left(2 m^2 + 1 \\right)^2 }$$\n$$ \\color{red}{ w = \\left( 4 m^3 + 1 \\right)^2 + \\left(2 m^2 - 2m \\right)^2 }$$\nYep, \n$$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 3 \\right)^2 + \\left( 2n^2 + 6n + 4 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 4 \\right)^2 + \\left( 2 n^2 + 4n +3 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 5 \\right)^2 + \\left( 2 n^2 + 2n \\right)^2 $$</p>\n\n<p>Check algebra:</p>\n\n<pre><code>parisize = 4000000, primelimit = 500509\n? u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2\n%1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2\n%2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2\n%3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? u - v\n%4 = 0\n? v - w\n%5 = 0\n? w - u\n%6 = 0\n? \n</code></pre>\n" }, { "answer_id": 2085305, "score": 3, "text": "<p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p>\n\n<p>So $x = -2$ etc.</p>\n" }, { "answer_id": 2085296, "score": 2, "text": "<p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares:\n$$y^2-z^2=a$$</p>\n\n<p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p>\n\n<p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p>\n\n<p>In addition, the solution for $z$ in both cases has to be the same, of course.</p>\n\n<p>Then:</p>\n\n<ul>\n<li>$y=(m+n)/2$</li>\n<li>$z=(m-n)/2=(u+v)/2$</li>\n<li>$w=(u-v)/2$</li>\n<li>$mn-uv=-2$</li>\n</ul>\n\n<p>Since $u=m-n-v$ we have\n$$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p>\n\n<p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is\n$$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p>\n\n<p>Let $s=m-3n$. There exists some $t$ such that\n$$s^2-t^2=8(n^2+1)$$</p>\n\n<p>Note that $s$ and $t$ are even.</p>\n\n<p>We can say for example (there are more possibilities) that\n$$s+t=2n^2+2$$\n$$s-t=4$$\nThen we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also\n$$v=\\frac{n^2+2n+3\\pm(n^2-1)}2=\\begin{cases}n+2\\implies u=n^2+n+1\\\\n^2+n+1\\implies u=n+2\\end{cases}$$</p>\n\n<p>The correct solution is what makes $v\\le u$, that is $u=n^2+n+1$, $v=n+2$.</p>\n\n<p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p>\n\n<p>Namely:</p>\n\n<p>$$\\begin{cases}y=\\frac{(n+1)(n+3)}2\\\\z=\\frac{n^2+2n+3}2\\\\w=\\frac{(n+1)(n-1)}2\\\\x=\\frac{y^2-z^2-3}2\\end{cases}$$</p>\n\n<p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. 
So $y=112$, $z=99$, $w=84$. And from this,\n$$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p>\n" }, { "answer_id": 2085639, "score": 0, "text": "<p>The differences between consecutive squares are consecutive odd numbers. So we have:</p>\n\n<p>$y^2-z^2=$ odd number</p>\n\n<p>$z^2-w^2=$ next odd number</p>\n\n<p>Of course if these were reversed (if the word \"next\" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. As it is, it's trickier, but certainly not impossible.</p>\n\n<p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p>\n\n<p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p>\n\n<hr>\n\n<p>Borrowing from another existing answer, let's check this solution against my statement above:</p>\n\n<p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p>\n\n<p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p>\n\n<p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p>\n\n<p>The first five numbers of course add up to $13\\times5=65$, and the latter three numbers add up to $21\\times3=63$, which illustrates my earlier conclusion.</p>\n\n<hr>\n\n<p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p>\n" }, { "answer_id": 2085294, "score": 8, "text": "<p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$\n$$ 1105 = 5 \\cdot 13 \\cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$\n$$ 12025 = 5^2 \\cdot 13 \\cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$\n$$ 66625 = 5^3 \\cdot 13 \\cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$\n$$ 252601 = 41 \\cdot 61 \\cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$\n$$ 292825 = 5^2 \\cdot 13 \\cdot 17 \\cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$\n$$ 751825 = 5^2 \\cdot 17 \\cdot 29 \\cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p>\n\n<p>INSERT FRIDAY: in order to show that these examples are always $n \\equiv 1 \\pmod 8,$ it suffices to consider everything $\\pmod{16}.$ Odd squares are $1,9 \\pmod {16}.$ Indeed, if $t \\equiv \\pm 1 \\pmod 8$ then $t^2 \\equiv 1 \\pmod{16}.$ if $s \\equiv \\pm 3 \\pmod 8$ then $s^2 \\equiv 9 \\pmod{16}.$ The restriction comes in even squares, which are $0, 4 \\pmod {16}.$ That is most of the argument, that $12$ is not a square $\\pmod {16}.$ Call it a proposition, it is not possible to have\n$$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with\n$$ x_1 \\equiv \\pm 1 \\pmod 8, \\; \\; \\; x_2 \\equiv 2 \\pmod 4, \\; \\; \\; x_3 \\equiv \\pm 3 \\pmod 8. 
$$</p>\n\n<p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p>\n\n<p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$\n$$ \\color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$\n$$ \\color{red}{w = \\left( 4 m^3 - 1 \\right)^2 + \\left(2 m^2 + 2m \\right)^2} $$\n$$ \\color{red}{w = \\left( 4 m^3 \\right)^2 + \\left(2 m^2 + 1 \\right)^2 }$$\n$$ \\color{red}{ w = \\left( 4 m^3 + 1 \\right)^2 + \\left(2 m^2 - 2m \\right)^2 }$$\nYep, \n$$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 3 \\right)^2 + \\left( 2n^2 + 6n + 4 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 4 \\right)^2 + \\left( 2 n^2 + 4n +3 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 5 \\right)^2 + \\left( 2 n^2 + 2n \\right)^2 $$</p>\n\n<p>Check algebra:</p>\n\n<pre><code>parisize = 4000000, primelimit = 500509\n? u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2\n%1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2\n%2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2\n%3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? u - v\n%4 = 0\n? v - w\n%5 = 0\n? w - u\n%6 = 0\n? \n</code></pre>\n" }, { "answer_id": 2085305, "score": 3, "text": "<p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p>\n\n<p>So $x = -2$ etc.</p>\n" }, { "answer_id": 2085296, "score": 2, "text": "<p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares:\n$$y^2-z^2=a$$</p>\n\n<p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p>\n\n<p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p>\n\n<p>In addition, the solution for $z$ in both cases has to be the same, of course.</p>\n\n<p>Then:</p>\n\n<ul>\n<li>$y=(m+n)/2$</li>\n<li>$z=(m-n)/2=(u+v)/2$</li>\n<li>$w=(u-v)/2$</li>\n<li>$mn-uv=-2$</li>\n</ul>\n\n<p>Since $u=m-n-v$ we have\n$$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p>\n\n<p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is\n$$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p>\n\n<p>Let $s=m-3n$. There exists some $t$ such that\n$$s^2-t^2=8(n^2+1)$$</p>\n\n<p>Note that $s$ and $t$ are even.</p>\n\n<p>We can say for example (there are more possibilities) that\n$$s+t=2n^2+2$$\n$$s-t=4$$\nThen we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also\n$$v=\\frac{n^2+2n+3\\pm(n^2-1)}2=\\begin{cases}n+2\\implies u=n^2+n+1\\\\n^2+n+1\\implies u=n+2\\end{cases}$$</p>\n\n<p>The correct solution is what makes $v\\le u$, that is $u=n^2+n+1$, $v=n+2$.</p>\n\n<p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p>\n\n<p>Namely:</p>\n\n<p>$$\\begin{cases}y=\\frac{(n+1)(n+3)}2\\\\z=\\frac{n^2+2n+3}2\\\\w=\\frac{(n+1)(n-1)}2\\\\x=\\frac{y^2-z^2-3}2\\end{cases}$$</p>\n\n<p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. 
So $y=112$, $z=99$, $w=84$, and hence $x=(y^2-z^2-3)/2=1370$. And from this,\n$$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p>\n" }, { "answer_id": 2085639, "score": 0, "text": "<p>The differences between consecutive squares are consecutive odd numbers. So we have:</p>\n\n<p>$y^2-z^2=$ odd number</p>\n\n<p>$z^2-w^2=$ next odd number</p>\n\n<p>Of course if these were reversed (if the word \"next\" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. As it is, it's trickier, but certainly not impossible.</p>\n\n<p>Phrasing the equations as I have, a little thought reveals that any sequence of consecutive odd numbers which can be partitioned so that the smaller numbers add up to an odd number $2$ greater than the sum of the larger numbers will provide a solution.</p>\n\n<p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p>\n\n<hr>\n\n<p>Borrowing from another existing answer, let's check this solution against my statement above:</p>\n\n<p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p>\n\n<p>The odd numbers which make up the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p>\n\n<p>The odd numbers which make up the difference between $9^2$ and $12^2$ are $$19,21,23$$</p>\n\n<p>The first five numbers of course add up to $13\\times5=65$, and the latter three numbers add up to $21\\times3=63$, which illustrates my earlier conclusion.</p>\n\n<hr>\n\n<p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd sums, after the extra $3$ units are subtracted: $$\\frac {63-3} 2=30$$ (This is because the smaller sum, $63$, equals $(x+1)+(x+2)$.)</p>\n" } ]
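A quick numerical cross-check of the closed forms above (a minimal PARI/GP sketch, not part of the original answers; it assumes the bases are $x+1$, $x+2$, $x+3$, as in the worked example):

<pre><code>{
  \\ generate y, z, w, x from the parameterization for odd n and
  \\ verify that the three sums of two squares agree
  forstep(n = 1, 13, 2,
    y = (n+1)*(n+3)/2;
    z = (n^2 + 2*n + 3)/2;
    w = (n+1)*(n-1)/2;
    x = (y^2 - z^2 - 3)/2;
    print([n, x, (x+1)^2 + y^2, (x+2)^2 + z^2, (x+3)^2 + w^2]));
}
</code></pre>

For $n=1,3,5,7,9,11,13$ the three sums agree and take the values $25$, $1105$, $12025$, $66625$, $252601$, $751825$, $1892185$: every number in Will Jagy's list except the sporadic $292825$, with one further term appended.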
$$</p>\n\n<p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p>\n\n<p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$\n$$ \\color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$\n$$ \\color{red}{w = \\left( 4 m^3 - 1 \\right)^2 + \\left(2 m^2 + 2m \\right)^2} $$\n$$ \\color{red}{w = \\left( 4 m^3 \\right)^2 + \\left(2 m^2 + 1 \\right)^2 }$$\n$$ \\color{red}{ w = \\left( 4 m^3 + 1 \\right)^2 + \\left(2 m^2 - 2m \\right)^2 }$$\nYep, \n$$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 3 \\right)^2 + \\left( 2n^2 + 6n + 4 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 4 \\right)^2 + \\left( 2 n^2 + 4n +3 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 5 \\right)^2 + \\left( 2 n^2 + 2n \\right)^2 $$</p>\n\n<p>Check algebra:</p>\n\n<pre><code>parisize = 4000000, primelimit = 500509\n? u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2\n%1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2\n%2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2\n%3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? u - v\n%4 = 0\n? v - w\n%5 = 0\n? w - u\n%6 = 0\n? \n</code></pre>\n" }, { "answer_id": 2085305, "score": 3, "text": "<p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p>\n\n<p>So $x = -2$ etc.</p>\n" }, { "answer_id": 2085296, "score": 2, "text": "<p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares:\n$$y^2-z^2=a$$</p>\n\n<p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p>\n\n<p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p>\n\n<p>In addition, the solution for $z$ in both cases has to be the same, of course.</p>\n\n<p>Then:</p>\n\n<ul>\n<li>$y=(m+n)/2$</li>\n<li>$z=(m-n)/2=(u+v)/2$</li>\n<li>$w=(u-v)/2$</li>\n<li>$mn-uv=-2$</li>\n</ul>\n\n<p>Since $u=m-n-v$ we have\n$$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p>\n\n<p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is\n$$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p>\n\n<p>Let $s=m-3n$. There exists some $t$ such that\n$$s^2-t^2=8(n^2+1)$$</p>\n\n<p>Note that $s$ and $t$ are even.</p>\n\n<p>We can say for example (there are more possibilities) that\n$$s+t=2n^2+2$$\n$$s-t=4$$\nThen we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also\n$$v=\\frac{n^2+2n+3\\pm(n^2-1)}2=\\begin{cases}n+2\\implies u=n^2+n+1\\\\n^2+n+1\\implies u=n+2\\end{cases}$$</p>\n\n<p>The correct solution is what makes $v\\le u$, that is $u=n^2+n+1$, $v=n+2$.</p>\n\n<p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p>\n\n<p>Namely:</p>\n\n<p>$$\\begin{cases}y=\\frac{(n+1)(n+3)}2\\\\z=\\frac{n^2+2n+3}2\\\\w=\\frac{(n+1)(n-1)}2\\\\x=\\frac{y^2-z^2-3}2\\end{cases}$$</p>\n\n<p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. 
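An independent brute-force search (again only a sketch, under the same base convention; issquare and sqrtint are PARI's built-in perfect-square test and integer square root): we need $y^2-z^2=2x+3$ and $z^2-w^2=2x+5$, and since $y^2-z^2=(y-z)(y+z)$ with $y\ge z+1$ gives $y+z\le 2x+3$, only $z\le x+1$ has to be tested for each $x$.

<pre><code>{
  \\ for each x, look for z with z^2 + (2x+3) and z^2 - (2x+5) both square;
  \\ print x, the common value N = (x+2)^2 + z^2, and N mod 8
  for(x = 0, 1500,
    for(z = sqrtint(2*x + 4) + 1, x + 1,
      if(issquare(z^2 - (2*x + 5)),
        if(issquare(z^2 + (2*x + 3)),
          print([x, (x+2)^2 + z^2, ((x+2)^2 + z^2) % 8])))));
}
</code></pre>

Every hit has last entry $1$, in line with the $n \equiv 1 \pmod 8$ observation in Will Jagy's answer.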
So $y=112$, $z=99$, $w=84$. And from this,\n$$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p>\n" }, { "answer_id": 2085639, "score": 0, "text": "<p>The differences between consecutive squares are consecutive odd numbers. So we have:</p>\n\n<p>$y^2-z^2=$ odd number</p>\n\n<p>$z^2-w^2=$ next odd number</p>\n\n<p>Of course if these were reversed (if the word \"next\" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. As it is, it's trickier, but certainly not impossible.</p>\n\n<p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p>\n\n<p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p>\n\n<hr>\n\n<p>Borrowing from another existing answer, let's check this solution against my statement above:</p>\n\n<p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p>\n\n<p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p>\n\n<p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p>\n\n<p>The first five numbers of course add up to $13\\times5=65$, and the latter three numbers add up to $21\\times3=63$, which illustrates my earlier conclusion.</p>\n\n<hr>\n\n<p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p>\n" }, { "answer_id": 2085294, "score": 8, "text": "<p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$\n$$ 1105 = 5 \\cdot 13 \\cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$\n$$ 12025 = 5^2 \\cdot 13 \\cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$\n$$ 66625 = 5^3 \\cdot 13 \\cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$\n$$ 252601 = 41 \\cdot 61 \\cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$\n$$ 292825 = 5^2 \\cdot 13 \\cdot 17 \\cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$\n$$ 751825 = 5^2 \\cdot 17 \\cdot 29 \\cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p>\n\n<p>INSERT FRIDAY: in order to show that these examples are always $n \\equiv 1 \\pmod 8,$ it suffices to consider everything $\\pmod{16}.$ Odd squares are $1,9 \\pmod {16}.$ Indeed, if $t \\equiv \\pm 1 \\pmod 8$ then $t^2 \\equiv 1 \\pmod{16}.$ if $s \\equiv \\pm 3 \\pmod 8$ then $s^2 \\equiv 9 \\pmod{16}.$ The restriction comes in even squares, which are $0, 4 \\pmod {16}.$ That is most of the argument, that $12$ is not a square $\\pmod {16}.$ Call it a proposition, it is not possible to have\n$$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with\n$$ x_1 \\equiv \\pm 1 \\pmod 8, \\; \\; \\; x_2 \\equiv 2 \\pmod 4, \\; \\; \\; x_3 \\equiv \\pm 3 \\pmod 8. 
$$</p>\n\n<p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p>\n\n<p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$\n$$ \\color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$\n$$ \\color{red}{w = \\left( 4 m^3 - 1 \\right)^2 + \\left(2 m^2 + 2m \\right)^2} $$\n$$ \\color{red}{w = \\left( 4 m^3 \\right)^2 + \\left(2 m^2 + 1 \\right)^2 }$$\n$$ \\color{red}{ w = \\left( 4 m^3 + 1 \\right)^2 + \\left(2 m^2 - 2m \\right)^2 }$$\nYep, \n$$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 3 \\right)^2 + \\left( 2n^2 + 6n + 4 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 4 \\right)^2 + \\left( 2 n^2 + 4n +3 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 5 \\right)^2 + \\left( 2 n^2 + 2n \\right)^2 $$</p>\n\n<p>Check algebra:</p>\n\n<pre><code>parisize = 4000000, primelimit = 500509\n? u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2\n%1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2\n%2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2\n%3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? u - v\n%4 = 0\n? v - w\n%5 = 0\n? w - u\n%6 = 0\n? \n</code></pre>\n" }, { "answer_id": 2085305, "score": 3, "text": "<p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p>\n\n<p>So $x = -2$ etc.</p>\n" }, { "answer_id": 2085296, "score": 2, "text": "<p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares:\n$$y^2-z^2=a$$</p>\n\n<p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p>\n\n<p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p>\n\n<p>In addition, the solution for $z$ in both cases has to be the same, of course.</p>\n\n<p>Then:</p>\n\n<ul>\n<li>$y=(m+n)/2$</li>\n<li>$z=(m-n)/2=(u+v)/2$</li>\n<li>$w=(u-v)/2$</li>\n<li>$mn-uv=-2$</li>\n</ul>\n\n<p>Since $u=m-n-v$ we have\n$$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p>\n\n<p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is\n$$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p>\n\n<p>Let $s=m-3n$. There exists some $t$ such that\n$$s^2-t^2=8(n^2+1)$$</p>\n\n<p>Note that $s$ and $t$ are even.</p>\n\n<p>We can say for example (there are more possibilities) that\n$$s+t=2n^2+2$$\n$$s-t=4$$\nThen we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also\n$$v=\\frac{n^2+2n+3\\pm(n^2-1)}2=\\begin{cases}n+2\\implies u=n^2+n+1\\\\n^2+n+1\\implies u=n+2\\end{cases}$$</p>\n\n<p>The correct solution is what makes $v\\le u$, that is $u=n^2+n+1$, $v=n+2$.</p>\n\n<p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p>\n\n<p>Namely:</p>\n\n<p>$$\\begin{cases}y=\\frac{(n+1)(n+3)}2\\\\z=\\frac{n^2+2n+3}2\\\\w=\\frac{(n+1)(n-1)}2\\\\x=\\frac{y^2-z^2-3}2\\end{cases}$$</p>\n\n<p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. 
So $y=112$, $z=99$, $w=84$. And from this,\n$$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p>\n" }, { "answer_id": 2085639, "score": 0, "text": "<p>The differences between consecutive squares are consecutive odd numbers. So we have:</p>\n\n<p>$y^2-z^2=$ odd number</p>\n\n<p>$z^2-w^2=$ next odd number</p>\n\n<p>Of course if these were reversed (if the word \"next\" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. As it is, it's trickier, but certainly not impossible.</p>\n\n<p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p>\n\n<p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p>\n\n<hr>\n\n<p>Borrowing from another existing answer, let's check this solution against my statement above:</p>\n\n<p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p>\n\n<p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p>\n\n<p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p>\n\n<p>The first five numbers of course add up to $13\\times5=65$, and the latter three numbers add up to $21\\times3=63$, which illustrates my earlier conclusion.</p>\n\n<hr>\n\n<p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p>\n" }, { "answer_id": 2085294, "score": 8, "text": "<p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$\n$$ 1105 = 5 \\cdot 13 \\cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$\n$$ 12025 = 5^2 \\cdot 13 \\cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$\n$$ 66625 = 5^3 \\cdot 13 \\cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$\n$$ 252601 = 41 \\cdot 61 \\cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$\n$$ 292825 = 5^2 \\cdot 13 \\cdot 17 \\cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$\n$$ 751825 = 5^2 \\cdot 17 \\cdot 29 \\cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p>\n\n<p>INSERT FRIDAY: in order to show that these examples are always $n \\equiv 1 \\pmod 8,$ it suffices to consider everything $\\pmod{16}.$ Odd squares are $1,9 \\pmod {16}.$ Indeed, if $t \\equiv \\pm 1 \\pmod 8$ then $t^2 \\equiv 1 \\pmod{16}.$ if $s \\equiv \\pm 3 \\pmod 8$ then $s^2 \\equiv 9 \\pmod{16}.$ The restriction comes in even squares, which are $0, 4 \\pmod {16}.$ That is most of the argument, that $12$ is not a square $\\pmod {16}.$ Call it a proposition, it is not possible to have\n$$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with\n$$ x_1 \\equiv \\pm 1 \\pmod 8, \\; \\; \\; x_2 \\equiv 2 \\pmod 4, \\; \\; \\; x_3 \\equiv \\pm 3 \\pmod 8. 
$$</p>\n\n<p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p>\n\n<p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$\n$$ \\color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$\n$$ \\color{red}{w = \\left( 4 m^3 - 1 \\right)^2 + \\left(2 m^2 + 2m \\right)^2} $$\n$$ \\color{red}{w = \\left( 4 m^3 \\right)^2 + \\left(2 m^2 + 1 \\right)^2 }$$\n$$ \\color{red}{ w = \\left( 4 m^3 + 1 \\right)^2 + \\left(2 m^2 - 2m \\right)^2 }$$\nYep, \n$$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 3 \\right)^2 + \\left( 2n^2 + 6n + 4 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 4 \\right)^2 + \\left( 2 n^2 + 4n +3 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 5 \\right)^2 + \\left( 2 n^2 + 2n \\right)^2 $$</p>\n\n<p>Check algebra:</p>\n\n<pre><code>parisize = 4000000, primelimit = 500509\n? u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2\n%1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2\n%2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2\n%3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? u - v\n%4 = 0\n? v - w\n%5 = 0\n? w - u\n%6 = 0\n? \n</code></pre>\n" }, { "answer_id": 2085305, "score": 3, "text": "<p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p>\n\n<p>So $x = -2$ etc.</p>\n" }, { "answer_id": 2085296, "score": 2, "text": "<p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares:\n$$y^2-z^2=a$$</p>\n\n<p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p>\n\n<p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p>\n\n<p>In addition, the solution for $z$ in both cases has to be the same, of course.</p>\n\n<p>Then:</p>\n\n<ul>\n<li>$y=(m+n)/2$</li>\n<li>$z=(m-n)/2=(u+v)/2$</li>\n<li>$w=(u-v)/2$</li>\n<li>$mn-uv=-2$</li>\n</ul>\n\n<p>Since $u=m-n-v$ we have\n$$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p>\n\n<p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is\n$$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p>\n\n<p>Let $s=m-3n$. There exists some $t$ such that\n$$s^2-t^2=8(n^2+1)$$</p>\n\n<p>Note that $s$ and $t$ are even.</p>\n\n<p>We can say for example (there are more possibilities) that\n$$s+t=2n^2+2$$\n$$s-t=4$$\nThen we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also\n$$v=\\frac{n^2+2n+3\\pm(n^2-1)}2=\\begin{cases}n+2\\implies u=n^2+n+1\\\\n^2+n+1\\implies u=n+2\\end{cases}$$</p>\n\n<p>The correct solution is what makes $v\\le u$, that is $u=n^2+n+1$, $v=n+2$.</p>\n\n<p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p>\n\n<p>Namely:</p>\n\n<p>$$\\begin{cases}y=\\frac{(n+1)(n+3)}2\\\\z=\\frac{n^2+2n+3}2\\\\w=\\frac{(n+1)(n-1)}2\\\\x=\\frac{y^2-z^2-3}2\\end{cases}$$</p>\n\n<p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. 
So $y=112$, $z=99$, $w=84$. And from this,\n$$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p>\n" }, { "answer_id": 2085639, "score": 0, "text": "<p>The differences between consecutive squares are consecutive odd numbers. So we have:</p>\n\n<p>$y^2-z^2=$ odd number</p>\n\n<p>$z^2-w^2=$ next odd number</p>\n\n<p>Of course if these were reversed (if the word \"next\" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. As it is, it's trickier, but certainly not impossible.</p>\n\n<p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p>\n\n<p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p>\n\n<hr>\n\n<p>Borrowing from another existing answer, let's check this solution against my statement above:</p>\n\n<p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p>\n\n<p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p>\n\n<p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p>\n\n<p>The first five numbers of course add up to $13\\times5=65$, and the latter three numbers add up to $21\\times3=63$, which illustrates my earlier conclusion.</p>\n\n<hr>\n\n<p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p>\n" }, { "answer_id": 2085294, "score": 8, "text": "<p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$\n$$ 1105 = 5 \\cdot 13 \\cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$\n$$ 12025 = 5^2 \\cdot 13 \\cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$\n$$ 66625 = 5^3 \\cdot 13 \\cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$\n$$ 252601 = 41 \\cdot 61 \\cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$\n$$ 292825 = 5^2 \\cdot 13 \\cdot 17 \\cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$\n$$ 751825 = 5^2 \\cdot 17 \\cdot 29 \\cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p>\n\n<p>INSERT FRIDAY: in order to show that these examples are always $n \\equiv 1 \\pmod 8,$ it suffices to consider everything $\\pmod{16}.$ Odd squares are $1,9 \\pmod {16}.$ Indeed, if $t \\equiv \\pm 1 \\pmod 8$ then $t^2 \\equiv 1 \\pmod{16}.$ if $s \\equiv \\pm 3 \\pmod 8$ then $s^2 \\equiv 9 \\pmod{16}.$ The restriction comes in even squares, which are $0, 4 \\pmod {16}.$ That is most of the argument, that $12$ is not a square $\\pmod {16}.$ Call it a proposition, it is not possible to have\n$$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with\n$$ x_1 \\equiv \\pm 1 \\pmod 8, \\; \\; \\; x_2 \\equiv 2 \\pmod 4, \\; \\; \\; x_3 \\equiv \\pm 3 \\pmod 8. 
$$</p>\n\n<p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p>\n\n<p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$\n$$ \\color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$\n$$ \\color{red}{w = \\left( 4 m^3 - 1 \\right)^2 + \\left(2 m^2 + 2m \\right)^2} $$\n$$ \\color{red}{w = \\left( 4 m^3 \\right)^2 + \\left(2 m^2 + 1 \\right)^2 }$$\n$$ \\color{red}{ w = \\left( 4 m^3 + 1 \\right)^2 + \\left(2 m^2 - 2m \\right)^2 }$$\nYep, \n$$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 3 \\right)^2 + \\left( 2n^2 + 6n + 4 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 4 \\right)^2 + \\left( 2 n^2 + 4n +3 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 5 \\right)^2 + \\left( 2 n^2 + 2n \\right)^2 $$</p>\n\n<p>Check algebra:</p>\n\n<pre><code>parisize = 4000000, primelimit = 500509\n? u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2\n%1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2\n%2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2\n%3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? u - v\n%4 = 0\n? v - w\n%5 = 0\n? w - u\n%6 = 0\n? \n</code></pre>\n" }, { "answer_id": 2085305, "score": 3, "text": "<p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p>\n\n<p>So $x = -2$ etc.</p>\n" }, { "answer_id": 2085296, "score": 2, "text": "<p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares:\n$$y^2-z^2=a$$</p>\n\n<p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p>\n\n<p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p>\n\n<p>In addition, the solution for $z$ in both cases has to be the same, of course.</p>\n\n<p>Then:</p>\n\n<ul>\n<li>$y=(m+n)/2$</li>\n<li>$z=(m-n)/2=(u+v)/2$</li>\n<li>$w=(u-v)/2$</li>\n<li>$mn-uv=-2$</li>\n</ul>\n\n<p>Since $u=m-n-v$ we have\n$$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p>\n\n<p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is\n$$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p>\n\n<p>Let $s=m-3n$. There exists some $t$ such that\n$$s^2-t^2=8(n^2+1)$$</p>\n\n<p>Note that $s$ and $t$ are even.</p>\n\n<p>We can say for example (there are more possibilities) that\n$$s+t=2n^2+2$$\n$$s-t=4$$\nThen we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also\n$$v=\\frac{n^2+2n+3\\pm(n^2-1)}2=\\begin{cases}n+2\\implies u=n^2+n+1\\\\n^2+n+1\\implies u=n+2\\end{cases}$$</p>\n\n<p>The correct solution is what makes $v\\le u$, that is $u=n^2+n+1$, $v=n+2$.</p>\n\n<p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p>\n\n<p>Namely:</p>\n\n<p>$$\\begin{cases}y=\\frac{(n+1)(n+3)}2\\\\z=\\frac{n^2+2n+3}2\\\\w=\\frac{(n+1)(n-1)}2\\\\x=\\frac{y^2-z^2-3}2\\end{cases}$$</p>\n\n<p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. 
So $y=112$, $z=99$, $w=84$. And from this,\n$$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p>\n" }, { "answer_id": 2085639, "score": 0, "text": "<p>The differences between consecutive squares are consecutive odd numbers. So we have:</p>\n\n<p>$y^2-z^2=$ odd number</p>\n\n<p>$z^2-w^2=$ next odd number</p>\n\n<p>Of course if these were reversed (if the word \"next\" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. As it is, it's trickier, but certainly not impossible.</p>\n\n<p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p>\n\n<p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p>\n\n<hr>\n\n<p>Borrowing from another existing answer, let's check this solution against my statement above:</p>\n\n<p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p>\n\n<p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p>\n\n<p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p>\n\n<p>The first five numbers of course add up to $13\\times5=65$, and the latter three numbers add up to $21\\times3=63$, which illustrates my earlier conclusion.</p>\n\n<hr>\n\n<p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p>\n" }, { "answer_id": 2085294, "score": 8, "text": "<p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$\n$$ 1105 = 5 \\cdot 13 \\cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$\n$$ 12025 = 5^2 \\cdot 13 \\cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$\n$$ 66625 = 5^3 \\cdot 13 \\cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$\n$$ 252601 = 41 \\cdot 61 \\cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$\n$$ 292825 = 5^2 \\cdot 13 \\cdot 17 \\cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$\n$$ 751825 = 5^2 \\cdot 17 \\cdot 29 \\cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p>\n\n<p>INSERT FRIDAY: in order to show that these examples are always $n \\equiv 1 \\pmod 8,$ it suffices to consider everything $\\pmod{16}.$ Odd squares are $1,9 \\pmod {16}.$ Indeed, if $t \\equiv \\pm 1 \\pmod 8$ then $t^2 \\equiv 1 \\pmod{16}.$ if $s \\equiv \\pm 3 \\pmod 8$ then $s^2 \\equiv 9 \\pmod{16}.$ The restriction comes in even squares, which are $0, 4 \\pmod {16}.$ That is most of the argument, that $12$ is not a square $\\pmod {16}.$ Call it a proposition, it is not possible to have\n$$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with\n$$ x_1 \\equiv \\pm 1 \\pmod 8, \\; \\; \\; x_2 \\equiv 2 \\pmod 4, \\; \\; \\; x_3 \\equiv \\pm 3 \\pmod 8. 
$$</p>\n\n<p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p>\n\n<p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$\n$$ \\color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$\n$$ \\color{red}{w = \\left( 4 m^3 - 1 \\right)^2 + \\left(2 m^2 + 2m \\right)^2} $$\n$$ \\color{red}{w = \\left( 4 m^3 \\right)^2 + \\left(2 m^2 + 1 \\right)^2 }$$\n$$ \\color{red}{ w = \\left( 4 m^3 + 1 \\right)^2 + \\left(2 m^2 - 2m \\right)^2 }$$\nYep, \n$$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 3 \\right)^2 + \\left( 2n^2 + 6n + 4 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 4 \\right)^2 + \\left( 2 n^2 + 4n +3 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 5 \\right)^2 + \\left( 2 n^2 + 2n \\right)^2 $$</p>\n\n<p>Check algebra:</p>\n\n<pre><code>parisize = 4000000, primelimit = 500509\n? u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2\n%1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2\n%2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2\n%3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? u - v\n%4 = 0\n? v - w\n%5 = 0\n? w - u\n%6 = 0\n? \n</code></pre>\n" }, { "answer_id": 2085305, "score": 3, "text": "<p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p>\n\n<p>So $x = -2$ etc.</p>\n" }, { "answer_id": 2085296, "score": 2, "text": "<p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares:\n$$y^2-z^2=a$$</p>\n\n<p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p>\n\n<p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p>\n\n<p>In addition, the solution for $z$ in both cases has to be the same, of course.</p>\n\n<p>Then:</p>\n\n<ul>\n<li>$y=(m+n)/2$</li>\n<li>$z=(m-n)/2=(u+v)/2$</li>\n<li>$w=(u-v)/2$</li>\n<li>$mn-uv=-2$</li>\n</ul>\n\n<p>Since $u=m-n-v$ we have\n$$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p>\n\n<p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is\n$$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p>\n\n<p>Let $s=m-3n$. There exists some $t$ such that\n$$s^2-t^2=8(n^2+1)$$</p>\n\n<p>Note that $s$ and $t$ are even.</p>\n\n<p>We can say for example (there are more possibilities) that\n$$s+t=2n^2+2$$\n$$s-t=4$$\nThen we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also\n$$v=\\frac{n^2+2n+3\\pm(n^2-1)}2=\\begin{cases}n+2\\implies u=n^2+n+1\\\\n^2+n+1\\implies u=n+2\\end{cases}$$</p>\n\n<p>The correct solution is what makes $v\\le u$, that is $u=n^2+n+1$, $v=n+2$.</p>\n\n<p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p>\n\n<p>Namely:</p>\n\n<p>$$\\begin{cases}y=\\frac{(n+1)(n+3)}2\\\\z=\\frac{n^2+2n+3}2\\\\w=\\frac{(n+1)(n-1)}2\\\\x=\\frac{y^2-z^2-3}2\\end{cases}$$</p>\n\n<p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. 
So $y=112$, $z=99$, $w=84$. And from this,\n$$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p>\n" }, { "answer_id": 2085639, "score": 0, "text": "<p>The differences between consecutive squares are consecutive odd numbers. So we have:</p>\n\n<p>$y^2-z^2=$ odd number</p>\n\n<p>$z^2-w^2=$ next odd number</p>\n\n<p>Of course if these were reversed (if the word \"next\" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. As it is, it's trickier, but certainly not impossible.</p>\n\n<p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p>\n\n<p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p>\n\n<hr>\n\n<p>Borrowing from another existing answer, let's check this solution against my statement above:</p>\n\n<p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p>\n\n<p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p>\n\n<p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p>\n\n<p>The first five numbers of course add up to $13\\times5=65$, and the latter three numbers add up to $21\\times3=63$, which illustrates my earlier conclusion.</p>\n\n<hr>\n\n<p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p>\n" }, { "answer_id": 2085294, "score": 8, "text": "<p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$\n$$ 1105 = 5 \\cdot 13 \\cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$\n$$ 12025 = 5^2 \\cdot 13 \\cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$\n$$ 66625 = 5^3 \\cdot 13 \\cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$\n$$ 252601 = 41 \\cdot 61 \\cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$\n$$ 292825 = 5^2 \\cdot 13 \\cdot 17 \\cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$\n$$ 751825 = 5^2 \\cdot 17 \\cdot 29 \\cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p>\n\n<p>INSERT FRIDAY: in order to show that these examples are always $n \\equiv 1 \\pmod 8,$ it suffices to consider everything $\\pmod{16}.$ Odd squares are $1,9 \\pmod {16}.$ Indeed, if $t \\equiv \\pm 1 \\pmod 8$ then $t^2 \\equiv 1 \\pmod{16}.$ if $s \\equiv \\pm 3 \\pmod 8$ then $s^2 \\equiv 9 \\pmod{16}.$ The restriction comes in even squares, which are $0, 4 \\pmod {16}.$ That is most of the argument, that $12$ is not a square $\\pmod {16}.$ Call it a proposition, it is not possible to have\n$$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with\n$$ x_1 \\equiv \\pm 1 \\pmod 8, \\; \\; \\; x_2 \\equiv 2 \\pmod 4, \\; \\; \\; x_3 \\equiv \\pm 3 \\pmod 8. 
$$</p>\n\n<p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p>\n\n<p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$\n$$ \\color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$\n$$ \\color{red}{w = \\left( 4 m^3 - 1 \\right)^2 + \\left(2 m^2 + 2m \\right)^2} $$\n$$ \\color{red}{w = \\left( 4 m^3 \\right)^2 + \\left(2 m^2 + 1 \\right)^2 }$$\n$$ \\color{red}{ w = \\left( 4 m^3 + 1 \\right)^2 + \\left(2 m^2 - 2m \\right)^2 }$$\nYep, \n$$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 3 \\right)^2 + \\left( 2n^2 + 6n + 4 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 4 \\right)^2 + \\left( 2 n^2 + 4n +3 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 5 \\right)^2 + \\left( 2 n^2 + 2n \\right)^2 $$</p>\n\n<p>Check algebra:</p>\n\n<pre><code>parisize = 4000000, primelimit = 500509\n? u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2\n%1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2\n%2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2\n%3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? u - v\n%4 = 0\n? v - w\n%5 = 0\n? w - u\n%6 = 0\n? \n</code></pre>\n" }, { "answer_id": 2085305, "score": 3, "text": "<p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p>\n\n<p>So $x = -2$ etc.</p>\n" }, { "answer_id": 2085296, "score": 2, "text": "<p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares:\n$$y^2-z^2=a$$</p>\n\n<p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p>\n\n<p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p>\n\n<p>In addition, the solution for $z$ in both cases has to be the same, of course.</p>\n\n<p>Then:</p>\n\n<ul>\n<li>$y=(m+n)/2$</li>\n<li>$z=(m-n)/2=(u+v)/2$</li>\n<li>$w=(u-v)/2$</li>\n<li>$mn-uv=-2$</li>\n</ul>\n\n<p>Since $u=m-n-v$ we have\n$$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p>\n\n<p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is\n$$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p>\n\n<p>Let $s=m-3n$. There exists some $t$ such that\n$$s^2-t^2=8(n^2+1)$$</p>\n\n<p>Note that $s$ and $t$ are even.</p>\n\n<p>We can say for example (there are more possibilities) that\n$$s+t=2n^2+2$$\n$$s-t=4$$\nThen we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also\n$$v=\\frac{n^2+2n+3\\pm(n^2-1)}2=\\begin{cases}n+2\\implies u=n^2+n+1\\\\n^2+n+1\\implies u=n+2\\end{cases}$$</p>\n\n<p>The correct solution is what makes $v\\le u$, that is $u=n^2+n+1$, $v=n+2$.</p>\n\n<p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p>\n\n<p>Namely:</p>\n\n<p>$$\\begin{cases}y=\\frac{(n+1)(n+3)}2\\\\z=\\frac{n^2+2n+3}2\\\\w=\\frac{(n+1)(n-1)}2\\\\x=\\frac{y^2-z^2-3}2\\end{cases}$$</p>\n\n<p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. 
So $y=112$, $z=99$, $w=84$. And from this,\n$$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p>\n" }, { "answer_id": 2085639, "score": 0, "text": "<p>The differences between consecutive squares are consecutive odd numbers. So we have:</p>\n\n<p>$y^2-z^2=$ odd number</p>\n\n<p>$z^2-w^2=$ next odd number</p>\n\n<p>Of course if these were reversed (if the word \"next\" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. As it is, it's trickier, but certainly not impossible.</p>\n\n<p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p>\n\n<p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p>\n\n<hr>\n\n<p>Borrowing from another existing answer, let's check this solution against my statement above:</p>\n\n<p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p>\n\n<p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p>\n\n<p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p>\n\n<p>The first five numbers of course add up to $13\\times5=65$, and the latter three numbers add up to $21\\times3=63$, which illustrates my earlier conclusion.</p>\n\n<hr>\n\n<p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p>\n" }, { "answer_id": 2085294, "score": 8, "text": "<p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$\n$$ 1105 = 5 \\cdot 13 \\cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$\n$$ 12025 = 5^2 \\cdot 13 \\cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$\n$$ 66625 = 5^3 \\cdot 13 \\cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$\n$$ 252601 = 41 \\cdot 61 \\cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$\n$$ 292825 = 5^2 \\cdot 13 \\cdot 17 \\cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$\n$$ 751825 = 5^2 \\cdot 17 \\cdot 29 \\cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p>\n\n<p>INSERT FRIDAY: in order to show that these examples are always $n \\equiv 1 \\pmod 8,$ it suffices to consider everything $\\pmod{16}.$ Odd squares are $1,9 \\pmod {16}.$ Indeed, if $t \\equiv \\pm 1 \\pmod 8$ then $t^2 \\equiv 1 \\pmod{16}.$ if $s \\equiv \\pm 3 \\pmod 8$ then $s^2 \\equiv 9 \\pmod{16}.$ The restriction comes in even squares, which are $0, 4 \\pmod {16}.$ That is most of the argument, that $12$ is not a square $\\pmod {16}.$ Call it a proposition, it is not possible to have\n$$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with\n$$ x_1 \\equiv \\pm 1 \\pmod 8, \\; \\; \\; x_2 \\equiv 2 \\pmod 4, \\; \\; \\; x_3 \\equiv \\pm 3 \\pmod 8. 
$$</p>\n\n<p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p>\n\n<p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$\n$$ \\color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$\n$$ \\color{red}{w = \\left( 4 m^3 - 1 \\right)^2 + \\left(2 m^2 + 2m \\right)^2} $$\n$$ \\color{red}{w = \\left( 4 m^3 \\right)^2 + \\left(2 m^2 + 1 \\right)^2 }$$\n$$ \\color{red}{ w = \\left( 4 m^3 + 1 \\right)^2 + \\left(2 m^2 - 2m \\right)^2 }$$\nYep, \n$$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 3 \\right)^2 + \\left( 2n^2 + 6n + 4 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 4 \\right)^2 + \\left( 2 n^2 + 4n +3 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 5 \\right)^2 + \\left( 2 n^2 + 2n \\right)^2 $$</p>\n\n<p>Check algebra:</p>\n\n<pre><code>parisize = 4000000, primelimit = 500509\n? u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2\n%1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2\n%2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2\n%3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? u - v\n%4 = 0\n? v - w\n%5 = 0\n? w - u\n%6 = 0\n? \n</code></pre>\n" }, { "answer_id": 2085305, "score": 3, "text": "<p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p>\n\n<p>So $x = -2$ etc.</p>\n" }, { "answer_id": 2085296, "score": 2, "text": "<p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares:\n$$y^2-z^2=a$$</p>\n\n<p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p>\n\n<p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p>\n\n<p>In addition, the solution for $z$ in both cases has to be the same, of course.</p>\n\n<p>Then:</p>\n\n<ul>\n<li>$y=(m+n)/2$</li>\n<li>$z=(m-n)/2=(u+v)/2$</li>\n<li>$w=(u-v)/2$</li>\n<li>$mn-uv=-2$</li>\n</ul>\n\n<p>Since $u=m-n-v$ we have\n$$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p>\n\n<p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is\n$$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p>\n\n<p>Let $s=m-3n$. There exists some $t$ such that\n$$s^2-t^2=8(n^2+1)$$</p>\n\n<p>Note that $s$ and $t$ are even.</p>\n\n<p>We can say for example (there are more possibilities) that\n$$s+t=2n^2+2$$\n$$s-t=4$$\nThen we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also\n$$v=\\frac{n^2+2n+3\\pm(n^2-1)}2=\\begin{cases}n+2\\implies u=n^2+n+1\\\\n^2+n+1\\implies u=n+2\\end{cases}$$</p>\n\n<p>The correct solution is what makes $v\\le u$, that is $u=n^2+n+1$, $v=n+2$.</p>\n\n<p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p>\n\n<p>Namely:</p>\n\n<p>$$\\begin{cases}y=\\frac{(n+1)(n+3)}2\\\\z=\\frac{n^2+2n+3}2\\\\w=\\frac{(n+1)(n-1)}2\\\\x=\\frac{y^2-z^2-3}2\\end{cases}$$</p>\n\n<p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. 
So $y=112$, $z=99$, $w=84$. And from this,\n$$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p>\n" }, { "answer_id": 2085639, "score": 0, "text": "<p>The differences between consecutive squares are consecutive odd numbers. So we have:</p>\n\n<p>$y^2-z^2=$ odd number</p>\n\n<p>$z^2-w^2=$ next odd number</p>\n\n<p>Of course if these were reversed (if the word \"next\" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. As it is, it's trickier, but certainly not impossible.</p>\n\n<p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p>\n\n<p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p>\n\n<hr>\n\n<p>Borrowing from another existing answer, let's check this solution against my statement above:</p>\n\n<p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p>\n\n<p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p>\n\n<p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p>\n\n<p>The first five numbers of course add up to $13\\times5=65$, and the latter three numbers add up to $21\\times3=63$, which illustrates my earlier conclusion.</p>\n\n<hr>\n\n<p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p>\n" }, { "answer_id": 2085294, "score": 8, "text": "<p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$\n$$ 1105 = 5 \\cdot 13 \\cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$\n$$ 12025 = 5^2 \\cdot 13 \\cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$\n$$ 66625 = 5^3 \\cdot 13 \\cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$\n$$ 252601 = 41 \\cdot 61 \\cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$\n$$ 292825 = 5^2 \\cdot 13 \\cdot 17 \\cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$\n$$ 751825 = 5^2 \\cdot 17 \\cdot 29 \\cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p>\n\n<p>INSERT FRIDAY: in order to show that these examples are always $n \\equiv 1 \\pmod 8,$ it suffices to consider everything $\\pmod{16}.$ Odd squares are $1,9 \\pmod {16}.$ Indeed, if $t \\equiv \\pm 1 \\pmod 8$ then $t^2 \\equiv 1 \\pmod{16}.$ if $s \\equiv \\pm 3 \\pmod 8$ then $s^2 \\equiv 9 \\pmod{16}.$ The restriction comes in even squares, which are $0, 4 \\pmod {16}.$ That is most of the argument, that $12$ is not a square $\\pmod {16}.$ Call it a proposition, it is not possible to have\n$$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with\n$$ x_1 \\equiv \\pm 1 \\pmod 8, \\; \\; \\; x_2 \\equiv 2 \\pmod 4, \\; \\; \\; x_3 \\equiv \\pm 3 \\pmod 8. 
$$</p>\n\n<p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p>\n\n<p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$\n$$ \\color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$\n$$ \\color{red}{w = \\left( 4 m^3 - 1 \\right)^2 + \\left(2 m^2 + 2m \\right)^2} $$\n$$ \\color{red}{w = \\left( 4 m^3 \\right)^2 + \\left(2 m^2 + 1 \\right)^2 }$$\n$$ \\color{red}{ w = \\left( 4 m^3 + 1 \\right)^2 + \\left(2 m^2 - 2m \\right)^2 }$$\nYep, \n$$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 3 \\right)^2 + \\left( 2n^2 + 6n + 4 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 4 \\right)^2 + \\left( 2 n^2 + 4n +3 \\right)^2 $$\n$$ w = \\left( 4 n^3 + 12 n^2 + 12 n + 5 \\right)^2 + \\left( 2 n^2 + 2n \\right)^2 $$</p>\n\n<p>Check algebra:</p>\n\n<pre><code>parisize = 4000000, primelimit = 500509\n? u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2\n%1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2\n%2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2\n%3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25\n? u - v\n%4 = 0\n? v - w\n%5 = 0\n? w - u\n%6 = 0\n? \n</code></pre>\n" }, { "answer_id": 2085305, "score": 3, "text": "<p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p>\n\n<p>So $x = -2$ etc.</p>\n" }, { "answer_id": 2085296, "score": 2, "text": "<p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares:\n$$y^2-z^2=a$$</p>\n\n<p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p>\n\n<p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p>\n\n<p>In addition, the solution for $z$ in both cases has to be the same, of course.</p>\n\n<p>Then:</p>\n\n<ul>\n<li>$y=(m+n)/2$</li>\n<li>$z=(m-n)/2=(u+v)/2$</li>\n<li>$w=(u-v)/2$</li>\n<li>$mn-uv=-2$</li>\n</ul>\n\n<p>Since $u=m-n-v$ we have\n$$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p>\n\n<p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is\n$$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p>\n\n<p>Let $s=m-3n$. There exists some $t$ such that\n$$s^2-t^2=8(n^2+1)$$</p>\n\n<p>Note that $s$ and $t$ are even.</p>\n\n<p>We can say for example (there are more possibilities) that\n$$s+t=2n^2+2$$\n$$s-t=4$$\nThen we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also\n$$v=\\frac{n^2+2n+3\\pm(n^2-1)}2=\\begin{cases}n+2\\implies u=n^2+n+1\\\\n^2+n+1\\implies u=n+2\\end{cases}$$</p>\n\n<p>The correct solution is what makes $v\\le u$, that is $u=n^2+n+1$, $v=n+2$.</p>\n\n<p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p>\n\n<p>Namely:</p>\n\n<p>$$\\begin{cases}y=\\frac{(n+1)(n+3)}2\\\\z=\\frac{n^2+2n+3}2\\\\w=\\frac{(n+1)(n-1)}2\\\\x=\\frac{y^2-z^2-3}2\\end{cases}$$</p>\n\n<p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. 
So $y=112$, $z=99$, $w=84$. And from this,\n$$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p>\n" }, { "answer_id": 2085639, "score": 0, "text": "<p>The differences between consecutive squares are consecutive odd numbers. So we have:</p>\n\n<p>$y^2-z^2=$ odd number</p>\n\n<p>$z^2-w^2=$ next odd number</p>\n\n<p>Of course if these were reversed (if the word \"next\" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. As it is, it's trickier, but certainly not impossible.</p>\n\n<p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p>\n\n<p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p>\n\n<hr>\n\n<p>Borrowing from another existing answer, let's check this solution against my statement above:</p>\n\n<p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p>\n\n<p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p>\n\n<p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p>\n\n<p>The first five numbers of course add up to $13\\times5=65$, and the latter three numbers add up to $21\\times3=63$, which illustrates my earlier conclusion.</p>\n\n<hr>\n\n<p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p>\n" } ]
Question: Integer solutions to nonlinear system of equations $(x+1)^2+y^2 = (x+2)^2+z^2$ and $(x+2)^2+z^2 = (x+3)^2+w^2$

Answer 1: <p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$
$$ 1105 = 5 \cdot 13 \cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$
$$ 12025 = 5^2 \cdot 13 \cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$
$$ 66625 = 5^3 \cdot 13 \cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$
$$ 252601 = 41 \cdot 61 \cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$
$$ 292825 = 5^2 \cdot 13 \cdot 17 \cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$
$$ 751825 = 5^2 \cdot 17 \cdot 29 \cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p>

<p>INSERT FRIDAY: in order to show that these examples are always $n \equiv 1 \pmod 8,$ it suffices to consider everything $\pmod{16}.$ Odd squares are $1,9 \pmod {16}.$ Indeed, if $t \equiv \pm 1 \pmod 8$ then $t^2 \equiv 1 \pmod{16},$ and if $s \equiv \pm 3 \pmod 8$ then $s^2 \equiv 9 \pmod{16}.$ The restriction comes in even squares, which are $0, 4 \pmod {16};$ that is most of the argument, since $12$ is not a square $\pmod {16}.$ Call it a proposition: it is not possible to have
$$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 = x_3^2 + y_3^2 $$
with
$$ x_1 \equiv \pm 1 \pmod 8, \; \; \; x_2 \equiv 2 \pmod 4, \; \; \; x_3 \equiv \pm 3 \pmod 8. $$</p>

<p>Except for 292825 there seems to be a pattern, probably one-variable polynomials.</p>

<p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$
$$ \color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$
$$ \color{red}{w = \left( 4 m^3 - 1 \right)^2 + \left(2 m^2 + 2m \right)^2} $$
$$ \color{red}{w = \left( 4 m^3 \right)^2 + \left(2 m^2 + 1 \right)^2 }$$
$$ \color{red}{ w = \left( 4 m^3 + 1 \right)^2 + \left(2 m^2 - 2m \right)^2 }$$
Yep,
$$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$
$$ w = \left( 4 n^3 + 12 n^2 + 12 n + 3 \right)^2 + \left( 2n^2 + 6n + 4 \right)^2 $$
$$ w = \left( 4 n^3 + 12 n^2 + 12 n + 4 \right)^2 + \left( 2 n^2 + 4n + 3 \right)^2 $$
$$ w = \left( 4 n^3 + 12 n^2 + 12 n + 5 \right)^2 + \left( 2 n^2 + 2n \right)^2 $$</p>

<p>Check algebra:</p>

<pre><code>parisize = 4000000, primelimit = 500509
? u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2
%1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25
? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2
%2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25
? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2
%3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25
? u - v
%4 = 0
? v - w
%5 = 0
? w - u
%6 = 0
?
</code></pre>
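<p>A quick numerical cross-check, not part of the original answer: the naive GP/PARI scan below runs over the question's own variables and rediscovers the table entries above (possibly among others), each printing $n \bmod 8 = 1$. The bound on $y$ is a heuristic chosen for the demo, not a proven limit.</p>

<pre><code>\\ sketch, not from the original answer: scan x and y directly and test
\\ whether n = (x+1)^2 + y^2 can also be written with first legs x+2, x+3
{
  for(x = 0, 1000,
    for(y = 0, 2*x + 10,          \\ heuristic demo bound on y
      my(n = (x+1)^2 + y^2);
      if(issquare(n - (x+2)^2) &amp;&amp; issquare(n - (x+3)^2),
        print("x = ", x, "  n = ", n, "  n mod 8 = ", n % 8))));
}
</code></pre>

<p>For instance, it prints $x = 2$ for $n = 25$ and $x = 30$ for $n = 1105$.</p>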
Answer 2: <p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p>

<p>So $x = -2$ etc.</p>

Answer 3: <p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares:
$$y^2-z^2=a$$</p>

<p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possibility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p>

<p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p>

<p>In addition, the solution for $z$ in both cases has to be the same, of course.</p>

<p>Then:</p>

<ul>
<li>$y=(m+n)/2$</li>
<li>$z=(m-n)/2=(u+v)/2$</li>
<li>$w=(u-v)/2$</li>
<li>$mn-uv=-2$</li>
</ul>

<p>Since $u=m-n-v$ we have
$$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p>

<p>If we see this as a second degree equation in $v$, the discriminant (which must be a square) is
$$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p>

<p>Let $s=m-3n$. There exists some $t$ such that
$$s^2-t^2=8(n^2+1)$$</p>

<p>Note that $s$ and $t$ are even.</p>

<p>We can say for example (there are more possibilities) that
$$s+t=2n^2+2$$
$$s-t=4$$
Then we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also
$$v=\frac{n^2+2n+3\pm(n^2-1)}2=\begin{cases}n+2\implies u=n^2+n+1\\n^2+n+1\implies u=n+2\end{cases}$$</p>

<p>The correct solution is the one with $v\le u$ (so that $w=(u-v)/2\ge 0$): $u=n^2+n+1$, $v=n+2$.</p>

<p>Then, for <strong>any</strong> odd value of $n$ we have a solution (at least one). Will Jagy's $1105$ example is obtained for $n=3$ (and $n=1$ gives the trivial $25$).</p>

<p>Namely:</p>

<p>$$\begin{cases}y=\frac{(n+1)(n+3)}2\\z=\frac{n^2+2n+3}2\\w=\frac{(n+1)(n-1)}2\\x=\frac{y^2-z^2-3}2\end{cases}$$</p>

<p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. So $y=112$, $z=99$, $w=84$. And from this,
$$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p>
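<p>The closed form invites a direct machine check. Here is a small GP/PARI sketch, not part of the original answer, generating the family for the first few odd $n$ and testing both defining equations:</p>

<pre><code>\\ sketch, not from the original answer: build (x, y, z, w) from the
\\ closed form above for odd n and verify the original system
{
  forstep(n = 1, 13, 2,
    my(y = (n+1)*(n+3)/2, z = (n^2 + 2*n + 3)/2, w = (n+1)*(n-1)/2);
    my(x = (y^2 - z^2 - 3)/2);
    print("n = ", n, "  (x,y,z,w) = ", [x, y, z, w], "  ok = ",
      (x+1)^2 + y^2 == (x+2)^2 + z^2 &amp;&amp; (x+2)^2 + z^2 == (x+3)^2 + w^2));
}
</code></pre>

<p>Every line should report <code>ok = 1</code>; $n=1$ reproduces the trivial $25$ case, $n=3$ the $1105$ example, and $n=13$ the quadruple worked out above.</p>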
Answer 4: <p>The differences between consecutive squares are consecutive odd numbers. So we have:</p>

<p>$y^2-z^2=$ odd number</p>

<p>$z^2-w^2=$ next odd number</p>

<p>Of course if these were reversed (if the word "next" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. As it is, it's trickier, but certainly not impossible.</p>

<p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p>

<p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p>

<hr>

<p>Borrowing from another existing answer, let's check this solution against my statement above:</p>

<p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p>

<p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p>

<p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p>

<p>The first five numbers of course add up to $13\times5=65$, and the latter three numbers add up to $21\times3=63$, which illustrates my earlier conclusion.</p>

<hr>

<p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p>
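<p>The partition criterion is mechanical enough to automate. An illustrative GP/PARI sketch, again an addition rather than part of the original answer: every triple $w &lt; z &lt; y$ with $z^2-w^2=(y^2-z^2)+2$ and $y^2-z^2$ odd yields a solution, with $x$ recovered exactly as above.</p>

<pre><code>\\ sketch, not from the original answer: search for the partition
\\ criterion z^2 - w^2 = (y^2 - z^2) + 2, then x = ((y^2 - z^2) - 3)/2
{
  for(z = 1, 120,
    for(w = 0, z - 1,
      my(t = z^2 - w^2 - 2);  \\ forced value of y^2 - z^2
      if(t &gt; 2 &amp;&amp; t % 2 == 1 &amp;&amp; issquare(z^2 + t),
        print("x = ", (t-3)/2,
              "  (y,z,w) = (", sqrtint(z^2 + t), ",", z, ",", w, ")"))));
}
</code></pre>

<p>Among its output are $x=2$ (the $25$ case) and $x=30$ (the $31, 32, 33$ example checked above).</p>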
u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2 %1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2 %2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2 %3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? u - v %4 = 0 ? v - w %5 = 0 ? w - u %6 = 0 ? </code></pre> Answer 6: <p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p> <p>So $x = -2$ etc.</p> Answer 7: <p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares: $$y^2-z^2=a$$</p> <p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p> <p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p> <p>In addition, the solution for $z$ in both cases has to be the same, of course.</p> <p>Then:</p> <ul> <li>$y=(m+n)/2$</li> <li>$z=(m-n)/2=(u+v)/2$</li> <li>$w=(u-v)/2$</li> <li>$mn-uv=-2$</li> </ul> <p>Since $u=m-n-v$ we have $$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p> <p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is $$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p> <p>Let $s=m-3n$. There exists some $t$ such that $$s^2-t^2=8(n^2+1)$$</p> <p>Note that $s$ and $t$ are even.</p> <p>We can say for example (there are more possibilities) that $$s+t=2n^2+2$$ $$s-t=4$$ Then we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also $$v=\frac{n^2+2n+3\pm(n^2-1)}2=\begin{cases}n+2\implies u=n^2+n+1\\n^2+n+1\implies u=n+2\end{cases}$$</p> <p>The correct solution is what makes $v\le u$, that is $u=n^2+n+1$, $v=n+2$.</p> <p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p> <p>Namely:</p> <p>$$\begin{cases}y=\frac{(n+1)(n+3)}2\\z=\frac{n^2+2n+3}2\\w=\frac{(n+1)(n-1)}2\\x=\frac{y^2-z^2-3}2\end{cases}$$</p> <p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. So $y=112$, $z=99$, $w=84$. And from this, $$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p> Answer 8: <p>The differences between consecutive squares are consecutive odd numbers. So we have:</p> <p>$y^2-z^2=$ odd number</p> <p>$z^2-w^2=$ next odd number</p> <p>Of course if these were reversed (if the word "next" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. 
As it is, it's trickier, but certainly not impossible.</p> <p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p> <p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p> <hr> <p>Borrowing from another existing answer, let's check this solution against my statement above:</p> <p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p> <p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p> <p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p> <p>The first five numbers of course add up to $13\times5=65$, and the latter three numbers add up to $21\times3=63$, which illustrates my earlier conclusion.</p> <hr> <p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p> Answer 9: <p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$ $$ 1105 = 5 \cdot 13 \cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$ $$ 12025 = 5^2 \cdot 13 \cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$ $$ 66625 = 5^3 \cdot 13 \cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$ $$ 252601 = 41 \cdot 61 \cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$ $$ 292825 = 5^2 \cdot 13 \cdot 17 \cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$ $$ 751825 = 5^2 \cdot 17 \cdot 29 \cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p> <p>INSERT FRIDAY: in order to show that these examples are always $n \equiv 1 \pmod 8,$ it suffices to consider everything $\pmod{16}.$ Odd squares are $1,9 \pmod {16}.$ Indeed, if $t \equiv \pm 1 \pmod 8$ then $t^2 \equiv 1 \pmod{16}.$ if $s \equiv \pm 3 \pmod 8$ then $s^2 \equiv 9 \pmod{16}.$ The restriction comes in even squares, which are $0, 4 \pmod {16}.$ That is most of the argument, that $12$ is not a square $\pmod {16}.$ Call it a proposition, it is not possible to have $$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with $$ x_1 \equiv \pm 1 \pmod 8, \; \; \; x_2 \equiv 2 \pmod 4, \; \; \; x_3 \equiv \pm 3 \pmod 8. $$</p> <p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p> <p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$ $$ \color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$ $$ \color{red}{w = \left( 4 m^3 - 1 \right)^2 + \left(2 m^2 + 2m \right)^2} $$ $$ \color{red}{w = \left( 4 m^3 \right)^2 + \left(2 m^2 + 1 \right)^2 }$$ $$ \color{red}{ w = \left( 4 m^3 + 1 \right)^2 + \left(2 m^2 - 2m \right)^2 }$$ Yep, $$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 3 \right)^2 + \left( 2n^2 + 6n + 4 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 4 \right)^2 + \left( 2 n^2 + 4n +3 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 5 \right)^2 + \left( 2 n^2 + 2n \right)^2 $$</p> <p>Check algebra:</p> <pre><code>parisize = 4000000, primelimit = 500509 ? 
u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2 %1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2 %2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2 %3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? u - v %4 = 0 ? v - w %5 = 0 ? w - u %6 = 0 ? </code></pre> Answer 10: <p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p> <p>So $x = -2$ etc.</p> Answer 11: <p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares: $$y^2-z^2=a$$</p> <p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p> <p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p> <p>In addition, the solution for $z$ in both cases has to be the same, of course.</p> <p>Then:</p> <ul> <li>$y=(m+n)/2$</li> <li>$z=(m-n)/2=(u+v)/2$</li> <li>$w=(u-v)/2$</li> <li>$mn-uv=-2$</li> </ul> <p>Since $u=m-n-v$ we have $$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p> <p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is $$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p> <p>Let $s=m-3n$. There exists some $t$ such that $$s^2-t^2=8(n^2+1)$$</p> <p>Note that $s$ and $t$ are even.</p> <p>We can say for example (there are more possibilities) that $$s+t=2n^2+2$$ $$s-t=4$$ Then we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also $$v=\frac{n^2+2n+3\pm(n^2-1)}2=\begin{cases}n+2\implies u=n^2+n+1\\n^2+n+1\implies u=n+2\end{cases}$$</p> <p>The correct solution is what makes $v\le u$, that is $u=n^2+n+1$, $v=n+2$.</p> <p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p> <p>Namely:</p> <p>$$\begin{cases}y=\frac{(n+1)(n+3)}2\\z=\frac{n^2+2n+3}2\\w=\frac{(n+1)(n-1)}2\\x=\frac{y^2-z^2-3}2\end{cases}$$</p> <p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. So $y=112$, $z=99$, $w=84$. And from this, $$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p> Answer 12: <p>The differences between consecutive squares are consecutive odd numbers. So we have:</p> <p>$y^2-z^2=$ odd number</p> <p>$z^2-w^2=$ next odd number</p> <p>Of course if these were reversed (if the word "next" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. 
As it is, it's trickier, but certainly not impossible.</p> <p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p> <p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p> <hr> <p>Borrowing from another existing answer, let's check this solution against my statement above:</p> <p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p> <p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p> <p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p> <p>The first five numbers of course add up to $13\times5=65$, and the latter three numbers add up to $21\times3=63$, which illustrates my earlier conclusion.</p> <hr> <p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p> Answer 13: <p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$ $$ 1105 = 5 \cdot 13 \cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$ $$ 12025 = 5^2 \cdot 13 \cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$ $$ 66625 = 5^3 \cdot 13 \cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$ $$ 252601 = 41 \cdot 61 \cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$ $$ 292825 = 5^2 \cdot 13 \cdot 17 \cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$ $$ 751825 = 5^2 \cdot 17 \cdot 29 \cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p> <p>INSERT FRIDAY: in order to show that these examples are always $n \equiv 1 \pmod 8,$ it suffices to consider everything $\pmod{16}.$ Odd squares are $1,9 \pmod {16}.$ Indeed, if $t \equiv \pm 1 \pmod 8$ then $t^2 \equiv 1 \pmod{16}.$ if $s \equiv \pm 3 \pmod 8$ then $s^2 \equiv 9 \pmod{16}.$ The restriction comes in even squares, which are $0, 4 \pmod {16}.$ That is most of the argument, that $12$ is not a square $\pmod {16}.$ Call it a proposition, it is not possible to have $$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with $$ x_1 \equiv \pm 1 \pmod 8, \; \; \; x_2 \equiv 2 \pmod 4, \; \; \; x_3 \equiv \pm 3 \pmod 8. $$</p> <p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p> <p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$ $$ \color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$ $$ \color{red}{w = \left( 4 m^3 - 1 \right)^2 + \left(2 m^2 + 2m \right)^2} $$ $$ \color{red}{w = \left( 4 m^3 \right)^2 + \left(2 m^2 + 1 \right)^2 }$$ $$ \color{red}{ w = \left( 4 m^3 + 1 \right)^2 + \left(2 m^2 - 2m \right)^2 }$$ Yep, $$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 3 \right)^2 + \left( 2n^2 + 6n + 4 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 4 \right)^2 + \left( 2 n^2 + 4n +3 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 5 \right)^2 + \left( 2 n^2 + 2n \right)^2 $$</p> <p>Check algebra:</p> <pre><code>parisize = 4000000, primelimit = 500509 ? 
u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2 %1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2 %2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2 %3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? u - v %4 = 0 ? v - w %5 = 0 ? w - u %6 = 0 ? </code></pre> Answer 14: <p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p> <p>So $x = -2$ etc.</p> Answer 15: <p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares: $$y^2-z^2=a$$</p> <p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p> <p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p> <p>In addition, the solution for $z$ in both cases has to be the same, of course.</p> <p>Then:</p> <ul> <li>$y=(m+n)/2$</li> <li>$z=(m-n)/2=(u+v)/2$</li> <li>$w=(u-v)/2$</li> <li>$mn-uv=-2$</li> </ul> <p>Since $u=m-n-v$ we have $$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p> <p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is $$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p> <p>Let $s=m-3n$. There exists some $t$ such that $$s^2-t^2=8(n^2+1)$$</p> <p>Note that $s$ and $t$ are even.</p> <p>We can say for example (there are more possibilities) that $$s+t=2n^2+2$$ $$s-t=4$$ Then we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also $$v=\frac{n^2+2n+3\pm(n^2-1)}2=\begin{cases}n+2\implies u=n^2+n+1\\n^2+n+1\implies u=n+2\end{cases}$$</p> <p>The correct solution is what makes $v\le u$, that is $u=n^2+n+1$, $v=n+2$.</p> <p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p> <p>Namely:</p> <p>$$\begin{cases}y=\frac{(n+1)(n+3)}2\\z=\frac{n^2+2n+3}2\\w=\frac{(n+1)(n-1)}2\\x=\frac{y^2-z^2-3}2\end{cases}$$</p> <p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. So $y=112$, $z=99$, $w=84$. And from this, $$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p> Answer 16: <p>The differences between consecutive squares are consecutive odd numbers. So we have:</p> <p>$y^2-z^2=$ odd number</p> <p>$z^2-w^2=$ next odd number</p> <p>Of course if these were reversed (if the word "next" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. 
As it is, it's trickier, but certainly not impossible.</p> <p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p> <p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p> <hr> <p>Borrowing from another existing answer, let's check this solution against my statement above:</p> <p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p> <p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p> <p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p> <p>The first five numbers of course add up to $13\times5=65$, and the latter three numbers add up to $21\times3=63$, which illustrates my earlier conclusion.</p> <hr> <p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p> Answer 17: <p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$ $$ 1105 = 5 \cdot 13 \cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$ $$ 12025 = 5^2 \cdot 13 \cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$ $$ 66625 = 5^3 \cdot 13 \cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$ $$ 252601 = 41 \cdot 61 \cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$ $$ 292825 = 5^2 \cdot 13 \cdot 17 \cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$ $$ 751825 = 5^2 \cdot 17 \cdot 29 \cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p> <p>INSERT FRIDAY: in order to show that these examples are always $n \equiv 1 \pmod 8,$ it suffices to consider everything $\pmod{16}.$ Odd squares are $1,9 \pmod {16}.$ Indeed, if $t \equiv \pm 1 \pmod 8$ then $t^2 \equiv 1 \pmod{16}.$ if $s \equiv \pm 3 \pmod 8$ then $s^2 \equiv 9 \pmod{16}.$ The restriction comes in even squares, which are $0, 4 \pmod {16}.$ That is most of the argument, that $12$ is not a square $\pmod {16}.$ Call it a proposition, it is not possible to have $$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with $$ x_1 \equiv \pm 1 \pmod 8, \; \; \; x_2 \equiv 2 \pmod 4, \; \; \; x_3 \equiv \pm 3 \pmod 8. $$</p> <p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p> <p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$ $$ \color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$ $$ \color{red}{w = \left( 4 m^3 - 1 \right)^2 + \left(2 m^2 + 2m \right)^2} $$ $$ \color{red}{w = \left( 4 m^3 \right)^2 + \left(2 m^2 + 1 \right)^2 }$$ $$ \color{red}{ w = \left( 4 m^3 + 1 \right)^2 + \left(2 m^2 - 2m \right)^2 }$$ Yep, $$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 3 \right)^2 + \left( 2n^2 + 6n + 4 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 4 \right)^2 + \left( 2 n^2 + 4n +3 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 5 \right)^2 + \left( 2 n^2 + 2n \right)^2 $$</p> <p>Check algebra:</p> <pre><code>parisize = 4000000, primelimit = 500509 ? 
u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2 %1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2 %2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2 %3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? u - v %4 = 0 ? v - w %5 = 0 ? w - u %6 = 0 ? </code></pre> Answer 18: <p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p> <p>So $x = -2$ etc.</p> Answer 19: <p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares: $$y^2-z^2=a$$</p> <p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p> <p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p> <p>In addition, the solution for $z$ in both cases has to be the same, of course.</p> <p>Then:</p> <ul> <li>$y=(m+n)/2$</li> <li>$z=(m-n)/2=(u+v)/2$</li> <li>$w=(u-v)/2$</li> <li>$mn-uv=-2$</li> </ul> <p>Since $u=m-n-v$ we have $$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p> <p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is $$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p> <p>Let $s=m-3n$. There exists some $t$ such that $$s^2-t^2=8(n^2+1)$$</p> <p>Note that $s$ and $t$ are even.</p> <p>We can say for example (there are more possibilities) that $$s+t=2n^2+2$$ $$s-t=4$$ Then we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also $$v=\frac{n^2+2n+3\pm(n^2-1)}2=\begin{cases}n+2\implies u=n^2+n+1\\n^2+n+1\implies u=n+2\end{cases}$$</p> <p>The correct solution is what makes $v\le u$, that is $u=n^2+n+1$, $v=n+2$.</p> <p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p> <p>Namely:</p> <p>$$\begin{cases}y=\frac{(n+1)(n+3)}2\\z=\frac{n^2+2n+3}2\\w=\frac{(n+1)(n-1)}2\\x=\frac{y^2-z^2-3}2\end{cases}$$</p> <p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. So $y=112$, $z=99$, $w=84$. And from this, $$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p> Answer 20: <p>The differences between consecutive squares are consecutive odd numbers. So we have:</p> <p>$y^2-z^2=$ odd number</p> <p>$z^2-w^2=$ next odd number</p> <p>Of course if these were reversed (if the word "next" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. 
As it is, it's trickier, but certainly not impossible.</p> <p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p> <p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p> <hr> <p>Borrowing from another existing answer, let's check this solution against my statement above:</p> <p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p> <p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p> <p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p> <p>The first five numbers of course add up to $13\times5=65$, and the latter three numbers add up to $21\times3=63$, which illustrates my earlier conclusion.</p> <hr> <p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p> Answer 21: <p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$ $$ 1105 = 5 \cdot 13 \cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$ $$ 12025 = 5^2 \cdot 13 \cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$ $$ 66625 = 5^3 \cdot 13 \cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$ $$ 252601 = 41 \cdot 61 \cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$ $$ 292825 = 5^2 \cdot 13 \cdot 17 \cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$ $$ 751825 = 5^2 \cdot 17 \cdot 29 \cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p> <p>INSERT FRIDAY: in order to show that these examples are always $n \equiv 1 \pmod 8,$ it suffices to consider everything $\pmod{16}.$ Odd squares are $1,9 \pmod {16}.$ Indeed, if $t \equiv \pm 1 \pmod 8$ then $t^2 \equiv 1 \pmod{16}.$ if $s \equiv \pm 3 \pmod 8$ then $s^2 \equiv 9 \pmod{16}.$ The restriction comes in even squares, which are $0, 4 \pmod {16}.$ That is most of the argument, that $12$ is not a square $\pmod {16}.$ Call it a proposition, it is not possible to have $$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with $$ x_1 \equiv \pm 1 \pmod 8, \; \; \; x_2 \equiv 2 \pmod 4, \; \; \; x_3 \equiv \pm 3 \pmod 8. $$</p> <p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p> <p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$ $$ \color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$ $$ \color{red}{w = \left( 4 m^3 - 1 \right)^2 + \left(2 m^2 + 2m \right)^2} $$ $$ \color{red}{w = \left( 4 m^3 \right)^2 + \left(2 m^2 + 1 \right)^2 }$$ $$ \color{red}{ w = \left( 4 m^3 + 1 \right)^2 + \left(2 m^2 - 2m \right)^2 }$$ Yep, $$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 3 \right)^2 + \left( 2n^2 + 6n + 4 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 4 \right)^2 + \left( 2 n^2 + 4n +3 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 5 \right)^2 + \left( 2 n^2 + 2n \right)^2 $$</p> <p>Check algebra:</p> <pre><code>parisize = 4000000, primelimit = 500509 ? 
u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2 %1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2 %2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2 %3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? u - v %4 = 0 ? v - w %5 = 0 ? w - u %6 = 0 ? </code></pre> Answer 22: <p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p> <p>So $x = -2$ etc.</p> Answer 23: <p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares: $$y^2-z^2=a$$</p> <p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p> <p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p> <p>In addition, the solution for $z$ in both cases has to be the same, of course.</p> <p>Then:</p> <ul> <li>$y=(m+n)/2$</li> <li>$z=(m-n)/2=(u+v)/2$</li> <li>$w=(u-v)/2$</li> <li>$mn-uv=-2$</li> </ul> <p>Since $u=m-n-v$ we have $$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p> <p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is $$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p> <p>Let $s=m-3n$. There exists some $t$ such that $$s^2-t^2=8(n^2+1)$$</p> <p>Note that $s$ and $t$ are even.</p> <p>We can say for example (there are more possibilities) that $$s+t=2n^2+2$$ $$s-t=4$$ Then we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also $$v=\frac{n^2+2n+3\pm(n^2-1)}2=\begin{cases}n+2\implies u=n^2+n+1\\n^2+n+1\implies u=n+2\end{cases}$$</p> <p>The correct solution is what makes $v\le u$, that is $u=n^2+n+1$, $v=n+2$.</p> <p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p> <p>Namely:</p> <p>$$\begin{cases}y=\frac{(n+1)(n+3)}2\\z=\frac{n^2+2n+3}2\\w=\frac{(n+1)(n-1)}2\\x=\frac{y^2-z^2-3}2\end{cases}$$</p> <p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. So $y=112$, $z=99$, $w=84$. And from this, $$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p> Answer 24: <p>The differences between consecutive squares are consecutive odd numbers. So we have:</p> <p>$y^2-z^2=$ odd number</p> <p>$z^2-w^2=$ next odd number</p> <p>Of course if these were reversed (if the word "next" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. 
As it is, it's trickier, but certainly not impossible.</p> <p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p> <p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p> <hr> <p>Borrowing from another existing answer, let's check this solution against my statement above:</p> <p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p> <p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p> <p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p> <p>The first five numbers of course add up to $13\times5=65$, and the latter three numbers add up to $21\times3=63$, which illustrates my earlier conclusion.</p> <hr> <p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p> Answer 25: <p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$ $$ 1105 = 5 \cdot 13 \cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$ $$ 12025 = 5^2 \cdot 13 \cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$ $$ 66625 = 5^3 \cdot 13 \cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$ $$ 252601 = 41 \cdot 61 \cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$ $$ 292825 = 5^2 \cdot 13 \cdot 17 \cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$ $$ 751825 = 5^2 \cdot 17 \cdot 29 \cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p> <p>INSERT FRIDAY: in order to show that these examples are always $n \equiv 1 \pmod 8,$ it suffices to consider everything $\pmod{16}.$ Odd squares are $1,9 \pmod {16}.$ Indeed, if $t \equiv \pm 1 \pmod 8$ then $t^2 \equiv 1 \pmod{16}.$ if $s \equiv \pm 3 \pmod 8$ then $s^2 \equiv 9 \pmod{16}.$ The restriction comes in even squares, which are $0, 4 \pmod {16}.$ That is most of the argument, that $12$ is not a square $\pmod {16}.$ Call it a proposition, it is not possible to have $$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with $$ x_1 \equiv \pm 1 \pmod 8, \; \; \; x_2 \equiv 2 \pmod 4, \; \; \; x_3 \equiv \pm 3 \pmod 8. $$</p> <p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p> <p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$ $$ \color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$ $$ \color{red}{w = \left( 4 m^3 - 1 \right)^2 + \left(2 m^2 + 2m \right)^2} $$ $$ \color{red}{w = \left( 4 m^3 \right)^2 + \left(2 m^2 + 1 \right)^2 }$$ $$ \color{red}{ w = \left( 4 m^3 + 1 \right)^2 + \left(2 m^2 - 2m \right)^2 }$$ Yep, $$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 3 \right)^2 + \left( 2n^2 + 6n + 4 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 4 \right)^2 + \left( 2 n^2 + 4n +3 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 5 \right)^2 + \left( 2 n^2 + 2n \right)^2 $$</p> <p>Check algebra:</p> <pre><code>parisize = 4000000, primelimit = 500509 ? 
u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2 %1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2 %2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2 %3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? u - v %4 = 0 ? v - w %5 = 0 ? w - u %6 = 0 ? </code></pre> Answer 26: <p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p> <p>So $x = -2$ etc.</p> Answer 27: <p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares: $$y^2-z^2=a$$</p> <p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p> <p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p> <p>In addition, the solution for $z$ in both cases has to be the same, of course.</p> <p>Then:</p> <ul> <li>$y=(m+n)/2$</li> <li>$z=(m-n)/2=(u+v)/2$</li> <li>$w=(u-v)/2$</li> <li>$mn-uv=-2$</li> </ul> <p>Since $u=m-n-v$ we have $$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p> <p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is $$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p> <p>Let $s=m-3n$. There exists some $t$ such that $$s^2-t^2=8(n^2+1)$$</p> <p>Note that $s$ and $t$ are even.</p> <p>We can say for example (there are more possibilities) that $$s+t=2n^2+2$$ $$s-t=4$$ Then we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also $$v=\frac{n^2+2n+3\pm(n^2-1)}2=\begin{cases}n+2\implies u=n^2+n+1\\n^2+n+1\implies u=n+2\end{cases}$$</p> <p>The correct solution is what makes $v\le u$, that is $u=n^2+n+1$, $v=n+2$.</p> <p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p> <p>Namely:</p> <p>$$\begin{cases}y=\frac{(n+1)(n+3)}2\\z=\frac{n^2+2n+3}2\\w=\frac{(n+1)(n-1)}2\\x=\frac{y^2-z^2-3}2\end{cases}$$</p> <p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. So $y=112$, $z=99$, $w=84$. And from this, $$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p> Answer 28: <p>The differences between consecutive squares are consecutive odd numbers. So we have:</p> <p>$y^2-z^2=$ odd number</p> <p>$z^2-w^2=$ next odd number</p> <p>Of course if these were reversed (if the word "next" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. 
As it is, it's trickier, but certainly not impossible.</p> <p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p> <p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p> <hr> <p>Borrowing from another existing answer, let's check this solution against my statement above:</p> <p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p> <p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p> <p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p> <p>The first five numbers of course add up to $13\times5=65$, and the latter three numbers add up to $21\times3=63$, which illustrates my earlier conclusion.</p> <hr> <p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p> Answer 29: <p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$ $$ 1105 = 5 \cdot 13 \cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$ $$ 12025 = 5^2 \cdot 13 \cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$ $$ 66625 = 5^3 \cdot 13 \cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$ $$ 252601 = 41 \cdot 61 \cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$ $$ 292825 = 5^2 \cdot 13 \cdot 17 \cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$ $$ 751825 = 5^2 \cdot 17 \cdot 29 \cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p> <p>INSERT FRIDAY: in order to show that these examples are always $n \equiv 1 \pmod 8,$ it suffices to consider everything $\pmod{16}.$ Odd squares are $1,9 \pmod {16}.$ Indeed, if $t \equiv \pm 1 \pmod 8$ then $t^2 \equiv 1 \pmod{16}.$ if $s \equiv \pm 3 \pmod 8$ then $s^2 \equiv 9 \pmod{16}.$ The restriction comes in even squares, which are $0, 4 \pmod {16}.$ That is most of the argument, that $12$ is not a square $\pmod {16}.$ Call it a proposition, it is not possible to have $$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with $$ x_1 \equiv \pm 1 \pmod 8, \; \; \; x_2 \equiv 2 \pmod 4, \; \; \; x_3 \equiv \pm 3 \pmod 8. $$</p> <p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p> <p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$ $$ \color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$ $$ \color{red}{w = \left( 4 m^3 - 1 \right)^2 + \left(2 m^2 + 2m \right)^2} $$ $$ \color{red}{w = \left( 4 m^3 \right)^2 + \left(2 m^2 + 1 \right)^2 }$$ $$ \color{red}{ w = \left( 4 m^3 + 1 \right)^2 + \left(2 m^2 - 2m \right)^2 }$$ Yep, $$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 3 \right)^2 + \left( 2n^2 + 6n + 4 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 4 \right)^2 + \left( 2 n^2 + 4n +3 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 5 \right)^2 + \left( 2 n^2 + 2n \right)^2 $$</p> <p>Check algebra:</p> <pre><code>parisize = 4000000, primelimit = 500509 ? 
u = ( 4 * n^3 + 12 * n^2 + 12 * n + 3)^2 + ( 2 * n^2 + 6 * n + 4)^2 %1 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? v = ( 4 * n^3 + 12 * n^2 + 12 * n + 4)^2 + ( 2 * n^2 + 4 * n + 3)^2 %2 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? w = ( 4 * n^3 + 12 * n^2 + 12 * n + 5)^2 + ( 2 * n^2 + 2 * n + 0)^2 %3 = 16*n^6 + 96*n^5 + 244*n^4 + 336*n^3 + 268*n^2 + 120*n + 25 ? u - v %4 = 0 ? v - w %5 = 0 ? w - u %6 = 0 ? </code></pre> Answer 30: <p>$-1^2 + 0^2 = 0^2 + 1^2 = 1^2 + 0^2$ if you want a trivial example including negative and zero integers.</p> <p>So $x = -2$ etc.</p> Answer 31: <p>We need to find (or prove that it does not exist) an odd number $a$ that is the difference of two squares: $$y^2-z^2=a$$</p> <p>This is quite easy. We only need to write $a$ as the product of two different numbers: $a=mn$. If $m&gt;n$, say, then the easy possbility is that $m=y+z$ and $n=y-z$. We only need to solve for $y$ and $z$ and we are done. The values for $y$ and $z$ are integers because $m$ and $n$ have the same parity (both are odd).</p> <p>Now the hard step is that in addition to the former equation we have to find another square $w^2&lt;z^2$ such that $z^2-w^2=a+2$.</p> <p>In addition, the solution for $z$ in both cases has to be the same, of course.</p> <p>Then:</p> <ul> <li>$y=(m+n)/2$</li> <li>$z=(m-n)/2=(u+v)/2$</li> <li>$w=(u-v)/2$</li> <li>$mn-uv=-2$</li> </ul> <p>Since $u=m-n-v$ we have $$mn-uv=mn-(m-n-v)v=v^2-(m-n)v+mn=-2$$</p> <p>If we see this as a second degree equation on $v$, the discriminant (which must be a square) is $$(m-n)^2-4mn-8=m^2-6mn+n^2-8=(m-3n)^2-8(n^2+1)$$</p> <p>Let $s=m-3n$. There exists some $t$ such that $$s^2-t^2=8(n^2+1)$$</p> <p>Note that $s$ and $t$ are even.</p> <p>We can say for example (there are more possibilities) that $$s+t=2n^2+2$$ $$s-t=4$$ Then we have $s=n^2+3$, $t=n^2-1$. Therefore, $m=n^2+3n+3$. Also $$v=\frac{n^2+2n+3\pm(n^2-1)}2=\begin{cases}n+2\implies u=n^2+n+1\\n^2+n+1\implies u=n+2\end{cases}$$</p> <p>The correct solution is what makes $v\le u$, that is $u=n^2+n+1$, $v=n+2$.</p> <p>Then, for <strong>any</strong> odd value for $n$ we have a solution (at least). The first of Will Jagy's solutions can be obtained for $n=3$.</p> <p>Namely:</p> <p>$$\begin{cases}y=\frac{(n+1)(n+3)}2\\z=\frac{n^2+2n+3}2\\w=\frac{(n+1)(n-1)}2\\x=\frac{y^2-z^2-3}2\end{cases}$$</p> <p><strong>Example</strong>: For $n=13$ we have $m=211$, $u=183$, $v=15$. So $y=112$, $z=99$, $w=84$. And from this, $$1371^2+112^2=1372^2+99^2=1373^2+84^2=1892185$$</p> Answer 32: <p>The differences between consecutive squares are consecutive odd numbers. So we have:</p> <p>$y^2-z^2=$ odd number</p> <p>$z^2-w^2=$ next odd number</p> <p>Of course if these were reversed (if the word "next" were moved one equation up) then the solution would be simple; <em>any</em> three consecutive numbers would work. 
As it is, it's trickier, but certainly not impossible.</p> <p>Phrasing the equations as I have, a little thought will reveal that any sequence of consecutive odd numbers which can be partitioned such that the smaller numbers add up to an odd number that is $2$ greater than the sum of the larger numbers of the partition, will provide a solution.</p> <p>What's more, <em>any</em> solution to the original equation will exhibit the characteristic described in the preceding paragraph; the two questions are equivalent.</p> <hr> <p>Borrowing from another existing answer, let's check this solution against my statement above:</p> <p>$$31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$</p> <p>The odd numbers which comprise the difference between $4^2$ and $9^2$ are $$(4+5), (5+6),(6+7),(7+8),(8+9)$$ or in other terms: $$9,11,13,15,17$$</p> <p>The odd numbers which comprise the difference between $9^2$ and $12^2$ are $$19,21,23$$</p> <p>The first five numbers of course add up to $13\times5=65$, and the latter three numbers add up to $21\times3=63$, which illustrates my earlier conclusion.</p> <hr> <p>And, of course, $65$ and $63$ are the differences between $(x+2)^2$ and $(x+3)^2$, and $(x+1)^2$ and $(x+2)^2$, respectively, which tells us that $x$ will equal half of the smaller of these two odd numbers, after the extra $3$ units are subtracted: $$\frac {63-3} 2=30$$ (This is because the smaller number, $63$, equals $(x+1)+(x+2)$.)</p> Answer 33: <p>$$ 25 = 5^2 = 3^2 + 4^2 = 4^2 + 3^2 = 5^2 + 0^2$$ $$ 1105 = 5 \cdot 13 \cdot 17 = 31^2 + 12^2 = 32^2 + 9^2 = 33^2 + 4^2$$ $$ 12025 = 5^2 \cdot 13 \cdot 37 = 107^2 + 24^2 = 108^2 + 19^2 = 109^2 + 12^2$$ $$ 66625 = 5^3 \cdot 13 \cdot 41 = 255^2 + 40^2 = 256^2 + 33^2 = 257^2 + 24^2$$ $$ 252601 = 41 \cdot 61 \cdot 101 = 499^2 + 60^2 = 500^2 + 51^2 = 501^2 + 40^2$$ $$ 292825 = 5^2 \cdot 13 \cdot 17 \cdot 53 = 539^2 + 48^2 = 540^2 + 35^2 = 541^2 + 12^2$$ $$ 751825 = 5^2 \cdot 17 \cdot 29 \cdot 61 = 863^2 + 84^2 = 864^2 + 73^2 = 865^2 + 60^2$$</p> <p>INSERT FRIDAY: in order to show that these examples are always $n \equiv 1 \pmod 8,$ it suffices to consider everything $\pmod{16}.$ Odd squares are $1,9 \pmod {16}.$ Indeed, if $t \equiv \pm 1 \pmod 8$ then $t^2 \equiv 1 \pmod{16}.$ if $s \equiv \pm 3 \pmod 8$ then $s^2 \equiv 9 \pmod{16}.$ The restriction comes in even squares, which are $0, 4 \pmod {16}.$ That is most of the argument, that $12$ is not a square $\pmod {16}.$ Call it a proposition, it is not possible to have $$ n = x_1^2 + y_1^2 = x_2^2 + y_2^2 =x_3^2 + y_3^2 $$ with $$ x_1 \equiv \pm 1 \pmod 8, \; \; \; x_2 \equiv 2 \pmod 4, \; \; \; x_3 \equiv \pm 3 \pmod 8. $$</p> <p>Except for 292825 there seems to be a pattern, probably one variable polynomials.</p> <p>INSERT: The pattern is easier to see if we translate the $n$ below as $n=m-1,$ or $m=n+1:$ $$ \color{magenta}{ w = 16 m^6 + 4 m^4 + 4 m^2 + 1} $$ $$ \color{red}{w = \left( 4 m^3 - 1 \right)^2 + \left(2 m^2 + 2m \right)^2} $$ $$ \color{red}{w = \left( 4 m^3 \right)^2 + \left(2 m^2 + 1 \right)^2 }$$ $$ \color{red}{ w = \left( 4 m^3 + 1 \right)^2 + \left(2 m^2 - 2m \right)^2 }$$ Yep, $$ w = 16n^6 + 96n^5 + 244 n^4 + 336 n^3 + 268 n^2 + 120 n + 25 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 3 \right)^2 + \left( 2n^2 + 6n + 4 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 4 \right)^2 + \left( 2 n^2 + 4n +3 \right)^2 $$ $$ w = \left( 4 n^3 + 12 n^2 + 12 n + 5 \right)^2 + \left( 2 n^2 + 2n \right)^2 $$</p> <p>Check algebra:</p> <pre><code>parisize = 4000000, primelimit = 500509 ? 
Which symbol can be used to refer to identity matrix?
2,085,613
6
https://math.stackexchange.com/questions/2085613/which-symbol-can-be-used-to-refer-to-identity-matrix
<p>$I$ is commonly used as a notation for the identity matrix. I am wondering whether there is any other notation for the identity matrix.</p>
[ { "answer_id": 2085625, "score": 3, "text": "<p><strong><a href=\"https://en.m.wikipedia.org/wiki/Identity_matrix\" rel=\"nofollow noreferrer\">From Wikipedia</a></strong>:</p>\n\n<hr>\n\n<p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) </p>\n\n<hr>\n\n<p>It can also be written using the <a href=\"https://en.m.wikipedia.org/wiki/Kronecker_delta\" rel=\"nofollow noreferrer\">Kronecker delta</a> notation:</p>\n\n<p>$$(I_{n})_{ij}=\\delta _{ij}.$$ Hope it helps. </p>\n" }, { "answer_id": 4350591, "score": 2, "text": "<p>You could also use a &quot;double one&quot; like this: <span class=\"math-container\">$\\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\\mathds{1}</code>. You can also import <code>bbold</code> and then use <code>\\mathbb{1}</code>, but this will change the appearance of your other <code>\\mathbb</code> commands.</p>\n" }, { "answer_id": 2085626, "score": 1, "text": "<p>$I_n$ </p>\n\n<p>$I_m$</p>\n\n<p>$I_n = diag(1,1,1,1\\cdots,1)$</p>\n\n<p>$(I_n)_{ij} =\\delta_{ij}$</p>\n" }, { "answer_id": 4166284, "score": 1, "text": "<p>You can use the notation <span class=\"math-container\">$\\mathbb{I}$</span> written using this code <code>\\mathbb{I} </code></p>\n" }, { "answer_id": 2085625, "score": 3, "text": "<p><strong><a href=\"https://en.m.wikipedia.org/wiki/Identity_matrix\" rel=\"nofollow noreferrer\">From Wikipedia</a></strong>:</p>\n\n<hr>\n\n<p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) </p>\n\n<hr>\n\n<p>It can also be written using the <a href=\"https://en.m.wikipedia.org/wiki/Kronecker_delta\" rel=\"nofollow noreferrer\">Kronecker delta</a> notation:</p>\n\n<p>$$(I_{n})_{ij}=\\delta _{ij}.$$ Hope it helps. </p>\n" }, { "answer_id": 4350591, "score": 2, "text": "<p>You could also use a &quot;double one&quot; like this: <span class=\"math-container\">$\\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\\mathds{1}</code>. You can also import <code>bbold</code> and then use <code>\\mathbb{1}</code>, but this will change the appearance of your other <code>\\mathbb</code> commands.</p>\n" }, { "answer_id": 2085626, "score": 1, "text": "<p>$I_n$ </p>\n\n<p>$I_m$</p>\n\n<p>$I_n = diag(1,1,1,1\\cdots,1)$</p>\n\n<p>$(I_n)_{ij} =\\delta_{ij}$</p>\n" }, { "answer_id": 4166284, "score": 1, "text": "<p>You can use the notation <span class=\"math-container\">$\\mathbb{I}$</span> written using this code <code>\\mathbb{I} </code></p>\n" }, { "answer_id": 2085625, "score": 3, "text": "<p><strong><a href=\"https://en.m.wikipedia.org/wiki/Identity_matrix\" rel=\"nofollow noreferrer\">From Wikipedia</a></strong>:</p>\n\n<hr>\n\n<p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) 
</p>\n\n<hr>\n\n<p>It can also be written using the <a href=\"https://en.m.wikipedia.org/wiki/Kronecker_delta\" rel=\"nofollow noreferrer\">Kronecker delta</a> notation:</p>\n\n<p>$$(I_{n})_{ij}=\\delta _{ij}.$$ Hope it helps. </p>\n" }, { "answer_id": 4350591, "score": 2, "text": "<p>You could also use a &quot;double one&quot; like this: <span class=\"math-container\">$\\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\\mathds{1}</code>. You can also import <code>bbold</code> and then use <code>\\mathbb{1}</code>, but this will change the appearance of your other <code>\\mathbb</code> commands.</p>\n" }, { "answer_id": 2085626, "score": 1, "text": "<p>$I_n$ </p>\n\n<p>$I_m$</p>\n\n<p>$I_n = diag(1,1,1,1\\cdots,1)$</p>\n\n<p>$(I_n)_{ij} =\\delta_{ij}$</p>\n" }, { "answer_id": 4166284, "score": 1, "text": "<p>You can use the notation <span class=\"math-container\">$\\mathbb{I}$</span> written using this code <code>\\mathbb{I} </code></p>\n" }, { "answer_id": 2085625, "score": 3, "text": "<p><strong><a href=\"https://en.m.wikipedia.org/wiki/Identity_matrix\" rel=\"nofollow noreferrer\">From Wikipedia</a></strong>:</p>\n\n<hr>\n\n<p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) </p>\n\n<hr>\n\n<p>It can also be written using the <a href=\"https://en.m.wikipedia.org/wiki/Kronecker_delta\" rel=\"nofollow noreferrer\">Kronecker delta</a> notation:</p>\n\n<p>$$(I_{n})_{ij}=\\delta _{ij}.$$ Hope it helps. </p>\n" }, { "answer_id": 4350591, "score": 2, "text": "<p>You could also use a &quot;double one&quot; like this: <span class=\"math-container\">$\\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\\mathds{1}</code>. You can also import <code>bbold</code> and then use <code>\\mathbb{1}</code>, but this will change the appearance of your other <code>\\mathbb</code> commands.</p>\n" }, { "answer_id": 2085626, "score": 1, "text": "<p>$I_n$ </p>\n\n<p>$I_m$</p>\n\n<p>$I_n = diag(1,1,1,1\\cdots,1)$</p>\n\n<p>$(I_n)_{ij} =\\delta_{ij}$</p>\n" }, { "answer_id": 4166284, "score": 1, "text": "<p>You can use the notation <span class=\"math-container\">$\\mathbb{I}$</span> written using this code <code>\\mathbb{I} </code></p>\n" }, { "answer_id": 2085625, "score": 3, "text": "<p><strong><a href=\"https://en.m.wikipedia.org/wiki/Identity_matrix\" rel=\"nofollow noreferrer\">From Wikipedia</a></strong>:</p>\n\n<hr>\n\n<p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) </p>\n\n<hr>\n\n<p>It can also be written using the <a href=\"https://en.m.wikipedia.org/wiki/Kronecker_delta\" rel=\"nofollow noreferrer\">Kronecker delta</a> notation:</p>\n\n<p>$$(I_{n})_{ij}=\\delta _{ij}.$$ Hope it helps. </p>\n" }, { "answer_id": 4350591, "score": 2, "text": "<p>You could also use a &quot;double one&quot; like this: <span class=\"math-container\">$\\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\\mathds{1}</code>. 
You can also import <code>bbold</code> and then use <code>\\mathbb{1}</code>, but this will change the appearance of your other <code>\\mathbb</code> commands.</p>\n" }, { "answer_id": 2085626, "score": 1, "text": "<p>$I_n$ </p>\n\n<p>$I_m$</p>\n\n<p>$I_n = diag(1,1,1,1\\cdots,1)$</p>\n\n<p>$(I_n)_{ij} =\\delta_{ij}$</p>\n" }, { "answer_id": 4166284, "score": 1, "text": "<p>You can use the notation <span class=\"math-container\">$\\mathbb{I}$</span> written using this code <code>\\mathbb{I} </code></p>\n" }, { "answer_id": 2085625, "score": 3, "text": "<p><strong><a href=\"https://en.m.wikipedia.org/wiki/Identity_matrix\" rel=\"nofollow noreferrer\">From Wikipedia</a></strong>:</p>\n\n<hr>\n\n<p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) </p>\n\n<hr>\n\n<p>It can also be written using the <a href=\"https://en.m.wikipedia.org/wiki/Kronecker_delta\" rel=\"nofollow noreferrer\">Kronecker delta</a> notation:</p>\n\n<p>$$(I_{n})_{ij}=\\delta _{ij}.$$ Hope it helps. </p>\n" }, { "answer_id": 4350591, "score": 2, "text": "<p>You could also use a &quot;double one&quot; like this: <span class=\"math-container\">$\\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\\mathds{1}</code>. You can also import <code>bbold</code> and then use <code>\\mathbb{1}</code>, but this will change the appearance of your other <code>\\mathbb</code> commands.</p>\n" }, { "answer_id": 2085626, "score": 1, "text": "<p>$I_n$ </p>\n\n<p>$I_m$</p>\n\n<p>$I_n = diag(1,1,1,1\\cdots,1)$</p>\n\n<p>$(I_n)_{ij} =\\delta_{ij}$</p>\n" }, { "answer_id": 4166284, "score": 1, "text": "<p>You can use the notation <span class=\"math-container\">$\\mathbb{I}$</span> written using this code <code>\\mathbb{I} </code></p>\n" }, { "answer_id": 2085625, "score": 3, "text": "<p><strong><a href=\"https://en.m.wikipedia.org/wiki/Identity_matrix\" rel=\"nofollow noreferrer\">From Wikipedia</a></strong>:</p>\n\n<hr>\n\n<p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) </p>\n\n<hr>\n\n<p>It can also be written using the <a href=\"https://en.m.wikipedia.org/wiki/Kronecker_delta\" rel=\"nofollow noreferrer\">Kronecker delta</a> notation:</p>\n\n<p>$$(I_{n})_{ij}=\\delta _{ij}.$$ Hope it helps. </p>\n" }, { "answer_id": 4350591, "score": 2, "text": "<p>You could also use a &quot;double one&quot; like this: <span class=\"math-container\">$\\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\\mathds{1}</code>. 
You can also import <code>bbold</code> and then use <code>\\mathbb{1}</code>, but this will change the appearance of your other <code>\\mathbb</code> commands.</p>\n" }, { "answer_id": 2085626, "score": 1, "text": "<p>$I_n$ </p>\n\n<p>$I_m$</p>\n\n<p>$I_n = diag(1,1,1,1\\cdots,1)$</p>\n\n<p>$(I_n)_{ij} =\\delta_{ij}$</p>\n" }, { "answer_id": 4166284, "score": 1, "text": "<p>You can use the notation <span class=\"math-container\">$\\mathbb{I}$</span> written using this code <code>\\mathbb{I} </code></p>\n" }, { "answer_id": 2085625, "score": 3, "text": "<p><strong><a href=\"https://en.m.wikipedia.org/wiki/Identity_matrix\" rel=\"nofollow noreferrer\">From Wikipedia</a></strong>:</p>\n\n<hr>\n\n<p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) </p>\n\n<hr>\n\n<p>It can also be written using the <a href=\"https://en.m.wikipedia.org/wiki/Kronecker_delta\" rel=\"nofollow noreferrer\">Kronecker delta</a> notation:</p>\n\n<p>$$(I_{n})_{ij}=\\delta _{ij}.$$ Hope it helps. </p>\n" }, { "answer_id": 4350591, "score": 2, "text": "<p>You could also use a &quot;double one&quot; like this: <span class=\"math-container\">$\\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\\mathds{1}</code>. You can also import <code>bbold</code> and then use <code>\\mathbb{1}</code>, but this will change the appearance of your other <code>\\mathbb</code> commands.</p>\n" }, { "answer_id": 2085626, "score": 1, "text": "<p>$I_n$ </p>\n\n<p>$I_m$</p>\n\n<p>$I_n = diag(1,1,1,1\\cdots,1)$</p>\n\n<p>$(I_n)_{ij} =\\delta_{ij}$</p>\n" }, { "answer_id": 4166284, "score": 1, "text": "<p>You can use the notation <span class=\"math-container\">$\\mathbb{I}$</span> written using this code <code>\\mathbb{I} </code></p>\n" }, { "answer_id": 2085625, "score": 3, "text": "<p><strong><a href=\"https://en.m.wikipedia.org/wiki/Identity_matrix\" rel=\"nofollow noreferrer\">From Wikipedia</a></strong>:</p>\n\n<hr>\n\n<p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) </p>\n\n<hr>\n\n<p>It can also be written using the <a href=\"https://en.m.wikipedia.org/wiki/Kronecker_delta\" rel=\"nofollow noreferrer\">Kronecker delta</a> notation:</p>\n\n<p>$$(I_{n})_{ij}=\\delta _{ij}.$$ Hope it helps. </p>\n" }, { "answer_id": 4350591, "score": 2, "text": "<p>You could also use a &quot;double one&quot; like this: <span class=\"math-container\">$\\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\\mathds{1}</code>. 
You can also import <code>bbold</code> and then use <code>\\mathbb{1}</code>, but this will change the appearance of your other <code>\\mathbb</code> commands.</p>\n" }, { "answer_id": 2085626, "score": 1, "text": "<p>$I_n$ </p>\n\n<p>$I_m$</p>\n\n<p>$I_n = diag(1,1,1,1\\cdots,1)$</p>\n\n<p>$(I_n)_{ij} =\\delta_{ij}$</p>\n" }, { "answer_id": 4166284, "score": 1, "text": "<p>You can use the notation <span class=\"math-container\">$\\mathbb{I}$</span> written using this code <code>\\mathbb{I} </code></p>\n" }, { "answer_id": 2085625, "score": 3, "text": "<p><strong><a href=\"https://en.m.wikipedia.org/wiki/Identity_matrix\" rel=\"nofollow noreferrer\">From Wikipedia</a></strong>:</p>\n\n<hr>\n\n<p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) </p>\n\n<hr>\n\n<p>It can also be written using the <a href=\"https://en.m.wikipedia.org/wiki/Kronecker_delta\" rel=\"nofollow noreferrer\">Kronecker delta</a> notation:</p>\n\n<p>$$(I_{n})_{ij}=\\delta _{ij}.$$ Hope it helps. </p>\n" }, { "answer_id": 4350591, "score": 2, "text": "<p>You could also use a &quot;double one&quot; like this: <span class=\"math-container\">$\\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\\mathds{1}</code>. You can also import <code>bbold</code> and then use <code>\\mathbb{1}</code>, but this will change the appearance of your other <code>\\mathbb</code> commands.</p>\n" }, { "answer_id": 2085626, "score": 1, "text": "<p>$I_n$ </p>\n\n<p>$I_m$</p>\n\n<p>$I_n = diag(1,1,1,1\\cdots,1)$</p>\n\n<p>$(I_n)_{ij} =\\delta_{ij}$</p>\n" }, { "answer_id": 4166284, "score": 1, "text": "<p>You can use the notation <span class=\"math-container\">$\\mathbb{I}$</span> written using this code <code>\\mathbb{I} </code></p>\n" }, { "answer_id": 2085625, "score": 3, "text": "<p><strong><a href=\"https://en.m.wikipedia.org/wiki/Identity_matrix\" rel=\"nofollow noreferrer\">From Wikipedia</a></strong>:</p>\n\n<hr>\n\n<p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) </p>\n\n<hr>\n\n<p>It can also be written using the <a href=\"https://en.m.wikipedia.org/wiki/Kronecker_delta\" rel=\"nofollow noreferrer\">Kronecker delta</a> notation:</p>\n\n<p>$$(I_{n})_{ij}=\\delta _{ij}.$$ Hope it helps. </p>\n" }, { "answer_id": 4350591, "score": 2, "text": "<p>You could also use a &quot;double one&quot; like this: <span class=\"math-container\">$\\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\\mathds{1}</code>. 
You can also import <code>bbold</code> and then use <code>\\mathbb{1}</code>, but this will change the appearance of your other <code>\\mathbb</code> commands.</p>\n" }, { "answer_id": 2085626, "score": 1, "text": "<p>$I_n$ </p>\n\n<p>$I_m$</p>\n\n<p>$I_n = diag(1,1,1,1\\cdots,1)$</p>\n\n<p>$(I_n)_{ij} =\\delta_{ij}$</p>\n" }, { "answer_id": 4166284, "score": 1, "text": "<p>You can use the notation <span class=\"math-container\">$\\mathbb{I}$</span> written using this code <code>\\mathbb{I} </code></p>\n" }, { "answer_id": 2085625, "score": 3, "text": "<p><strong><a href=\"https://en.m.wikipedia.org/wiki/Identity_matrix\" rel=\"nofollow noreferrer\">From Wikipedia</a></strong>:</p>\n\n<hr>\n\n<p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) </p>\n\n<hr>\n\n<p>It can also be written using the <a href=\"https://en.m.wikipedia.org/wiki/Kronecker_delta\" rel=\"nofollow noreferrer\">Kronecker delta</a> notation:</p>\n\n<p>$$(I_{n})_{ij}=\\delta _{ij}.$$ Hope it helps. </p>\n" }, { "answer_id": 4350591, "score": 2, "text": "<p>You could also use a &quot;double one&quot; like this: <span class=\"math-container\">$\\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\\mathds{1}</code>. You can also import <code>bbold</code> and then use <code>\\mathbb{1}</code>, but this will change the appearance of your other <code>\\mathbb</code> commands.</p>\n" }, { "answer_id": 2085626, "score": 1, "text": "<p>$I_n$ </p>\n\n<p>$I_m$</p>\n\n<p>$I_n = diag(1,1,1,1\\cdots,1)$</p>\n\n<p>$(I_n)_{ij} =\\delta_{ij}$</p>\n" }, { "answer_id": 4166284, "score": 1, "text": "<p>You can use the notation <span class=\"math-container\">$\\mathbb{I}$</span> written using this code <code>\\mathbb{I} </code></p>\n" } ]
Question: Which symbol can be used to refer to identity matrix? Answer 1: <p><strong><a href="https://en.m.wikipedia.org/wiki/Identity_matrix" rel="nofollow noreferrer">From Wikipedia</a></strong>:</p> <hr> <p>It is denoted by $I_n$, or simply by $I$ if the size is immaterial or can be trivially determined by the context. (In some fields, such as quantum mechanics, the identity matrix is denoted by a boldface one, <strong>1</strong>; otherwise it is identical to $I$.) </p> <hr> <p>It can also be written using the <a href="https://en.m.wikipedia.org/wiki/Kronecker_delta" rel="nofollow noreferrer">Kronecker delta</a> notation:</p> <p>$$(I_{n})_{ij}=\delta _{ij}.$$ Hope it helps. </p> Answer 2: <p>You could also use a &quot;double one&quot; like this: <span class="math-container">$\mathbb{1}$</span>. To write it in LaTeX, you can import the package <code>dsfont</code> and then use <code>\mathds{1}</code>. You can also import <code>bbold</code> and then use <code>\mathbb{1}</code>, but this will change the appearance of your other <code>\mathbb</code> commands.</p> Answer 3: <p>$I_n$ </p> <p>$I_m$</p> <p>$I_n = diag(1,1,1,1\cdots,1)$</p> <p>$(I_n)_{ij} =\delta_{ij}$</p> Answer 4: <p>You can use the notation <span class="math-container">$\mathbb{I}$</span> written using this code <code>\mathbb{I} </code></p>
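For reference, a minimal LaTeX sketch collecting the notations mentioned in the answers above (using the packages they name; <code>bbold</code> is left out because, as noted, it restyles <code>\mathbb</code> globally):

<pre><code>\documentclass{article}
\usepackage{amssymb} % \mathbb for blackboard bold, e.g. \mathbb{I}
\usepackage{dsfont}  % \mathds for the double-struck one, \mathds{1}
\begin{document}
Common symbols for the identity matrix: $I$, $I_n$, $\mathbb{I}$, $\mathds{1}$.
\end{document}
</code></pre>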
Proving $a\cos x+b\sin x=\sqrt{a^2+b^2}\cos(x-\alpha)$
2,085,687
6
https://math.stackexchange.com/questions/2085687/proving-a-cos-xb-sin-x-sqrta2b2-cosx-alpha
<blockquote> <p>Show that <span class="math-container">$a\cos x+b\sin x=\sqrt{a^2+b^2}\cos(x-\alpha)$</span> and find the correct phase angle <span class="math-container">$\alpha$</span>.</p> </blockquote> <p>This is my proof.</p> <p><a href="https://i.stack.imgur.com/vKulK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vKulK.png" alt="Triangle" /></a></p> <p>Let <span class="math-container">$x$</span> and <span class="math-container">$\alpha$</span> be the angles in a right triangle with sides <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span>, as shown in the figure. Then, <span class="math-container">$c=\sqrt{a^2+b^2}$</span>. The left-hand side is <span class="math-container">$a\cos x+b\sin x=\frac{ab}{c}+\frac{ab}{c}=2\frac{ab}{c}$</span>. The right-hand side is <span class="math-container">$\sqrt{a^2+b^2}\cos(x-\alpha)=c\left(\cos{x}\cos{\alpha}+\sin{x}\sin{\alpha}\right)=c\left(\frac{ab}{c^2}+\frac{ab}{c^2}\right)=2\frac{ab}{c}$</span>.</p> <p>Is my proof valid? Is there a more general way to prove it?</p> <p>For the second part of the question, I think it should be <span class="math-container">$\alpha=\arccos\frac{a}{\sqrt{a^2+b^2}}=\arcsin\frac{b}{\sqrt{a^2+b^2}}$</span>. Is this correct?</p>
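A quick numerical sanity check of the claimed identity, as a GP/PARI sketch (the sample values $a=3$, $b=4$, $x=0.7$ are arbitrary, and taking $\alpha=\arctan(b/a)$ assumes $a>0$):

<pre><code>? a = 3; b = 4; x = 0.7;
? al = atan(b/a);  \\ phase angle, valid here since a > 0
? a*cos(x) + b*sin(x) - sqrt(a^2 + b^2)*cos(x - al)  \\ should be 0 up to rounding
</code></pre>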
[ { "answer_id": 2085691, "score": 2, "text": "<p>You are largely correct. However you could prove the first part in a very simple way:</p>\n<p>We can write <span class=\"math-container\">$$P= a\\cos x+b\\sin x =\\sqrt {a^2+b^2} \\left[\\frac {a}{\\sqrt {a^2+b^2}}\\cos x+\\frac{b}{\\sqrt {a^2+b^2}}\\sin x \\right] $$</span> Now we can take <span class=\"math-container\">$\\frac{a}{\\sqrt {a^2+b^2}} $</span> as <span class=\"math-container\">$\\cos \\alpha $</span> giving us <span class=\"math-container\">$$P=\\sqrt {a^2+b^2}[\\cos \\alpha \\cos x+\\sin \\alpha \\sin x]=\\sqrt{a^2+b^2}\\cos (x-\\alpha) $$</span> And also <span class=\"math-container\">$$\\alpha =\\arccos \\frac {a}{\\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p>\n<p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p>\n" }, { "answer_id": 2085694, "score": 1, "text": "<p>The proof is only valid for $0&lt;x&lt;\\frac\\pi2$. For a more general proof, let:\n$$a\\cos x+b\\sin x\\ \\equiv\\ R\\cos(x-\\alpha)=R\\cos\\alpha\\cos x + R\\sin\\alpha\\sin x$$\nSo $a=R\\cos\\alpha$ and $b=R\\sin\\alpha$, from which you can readily find $R$ and $\\alpha$.</p>\n" }, { "answer_id": 2085732, "score": 1, "text": "<p>Let $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Then,</p>\n\n<p>$\na\\cos x + b\\sin x\\\\\n= r\\cos \\alpha \\cos x + r\\sin \\alpha \\sin x\\\\\n= r \\cos (x - \\alpha)\n$</p>\n\n<p>since we know that $\\cos \\theta \\cos \\phi + \\sin \\theta \\sin \\phi = \\cos (\\theta - \\phi)$. Now, we already have $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Squaring and adding both of these,</p>\n\n<p>$\na^2 + b^2 = r^2(\\cos^2 \\alpha + \\sin^2 \\alpha) = r^2 \\\\\n\\implies r = \\sqrt{a^2 + b^2}\n$</p>\n\n<p>which gives us our desired result.</p>\n" }, { "answer_id": 4021290, "score": 1, "text": "<p>We can prove this by using properties of complex numbers:\n<span class=\"math-container\">$$ \\begin{align}a\\cos(x) + b\\sin(x) &amp;= a \\cos(x) + b \\cos\\left(x-\\frac{\\pi}{2}\\right)\\\\\n &amp;=a \\operatorname{Re}\\left\\{e^{ix}\\right\\} + b \\operatorname{Re}\\left\\{e^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{ae^{ix} + be^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}(a-bi) \\right\\} \\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}\\sqrt{a^2+b^2}e^{i\\cdot\\operatorname{atan2}(-b,a)} \\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{\\sqrt{a^2+b^2}e^{i(x+\\operatorname{atan2}(-b,a))}\\right\\}\\\\\n &amp;= \\sqrt{a^2+b^2} \\cos(x+\\operatorname{atan2}(-b,a))\n\\end{align}\n$$</span></p>\n<p>where <a href=\"https://en.wikipedia.org/wiki/Atan2\" rel=\"nofollow noreferrer\"><span class=\"math-container\">$\\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class=\"math-container\">$x_0+iy_0$</span>.</p>\n<p>This formula works for all real numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> except when <span class=\"math-container\">$a=b=0$</span>.</p>\n<h2>Generalization</h2>\n<p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p>\n<p><span class=\"math-container\">$$ \\begin{align}a\\cos(x+\\alpha) + b \\cos(x+\\beta) &amp;= \\operatorname{Re}\\left\\{ae^{i(x+\\alpha)} + be^{i(x+\\beta)}\\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}\\left(ae^{i\\alpha}+be^{i\\beta}\\right) \\right\\}\\\\\n&amp;= 
\\operatorname{Re}\\left\\{e^{ix}ce^{i\\gamma} \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{ce^{i(x+\\gamma)} \\right\\}\\\\\n&amp;= c \\cos(x+\\gamma)\n\\end{align}\n$$</span></p>\n<p>where <span class=\"math-container\">$ce^{i\\gamma} = ae^{i\\alpha}+be^{i\\beta}$</span>.</p>\n<p>We have\n<span class=\"math-container\">$$\\begin{align}ce^{i\\gamma}&amp;=a (\\cos(\\alpha)+i\\sin(\\alpha)) +b (\\cos(\\beta)+i\\sin(\\beta))\\\\\n&amp;= (a\\cos(\\alpha)+b\\cos(\\beta)) + (a\\sin(\\alpha)+b\\sin(\\beta))i.\\\\\n\\end{align}\n$$</span></p>\n<p>So\n<span class=\"math-container\">$$c=\\sqrt{(a\\cos(\\alpha)+b\\cos(\\beta))^2+(a\\sin(\\alpha)+b\\sin(\\beta))^2}$$</span>\nand\n<span class=\"math-container\">$$\\gamma=\\operatorname{atan2}\\big(a\\sin(\\alpha)+b\\sin(\\beta),a\\cos(\\alpha)+b\\cos(\\beta)\\big).$$</span></p>\n<p>Sidenotes:</p>\n<ul>\n<li>This result is equivalent to\n<a href=\"https://en.wikipedia.org/wiki/Phasor\" rel=\"nofollow noreferrer\">phasor</a> addition.</li>\n<li>We can get similar formula involving <span class=\"math-container\">$\\sin$</span> by taking the imaginary\npart.</li>\n</ul>\n" }, { "answer_id": 2085691, "score": 2, "text": "<p>You are largely correct. However you could prove the first part in a very simple way:</p>\n<p>We can write <span class=\"math-container\">$$P= a\\cos x+b\\sin x =\\sqrt {a^2+b^2} \\left[\\frac {a}{\\sqrt {a^2+b^2}}\\cos x+\\frac{b}{\\sqrt {a^2+b^2}}\\sin x \\right] $$</span> Now we can take <span class=\"math-container\">$\\frac{a}{\\sqrt {a^2+b^2}} $</span> as <span class=\"math-container\">$\\cos \\alpha $</span> giving us <span class=\"math-container\">$$P=\\sqrt {a^2+b^2}[\\cos \\alpha \\cos x+\\sin \\alpha \\sin x]=\\sqrt{a^2+b^2}\\cos (x-\\alpha) $$</span> And also <span class=\"math-container\">$$\\alpha =\\arccos \\frac {a}{\\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p>\n<p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p>\n" }, { "answer_id": 2085694, "score": 1, "text": "<p>The proof is only valid for $0&lt;x&lt;\\frac\\pi2$. For a more general proof, let:\n$$a\\cos x+b\\sin x\\ \\equiv\\ R\\cos(x-\\alpha)=R\\cos\\alpha\\cos x + R\\sin\\alpha\\sin x$$\nSo $a=R\\cos\\alpha$ and $b=R\\sin\\alpha$, from which you can readily find $R$ and $\\alpha$.</p>\n" }, { "answer_id": 2085732, "score": 1, "text": "<p>Let $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Then,</p>\n\n<p>$\na\\cos x + b\\sin x\\\\\n= r\\cos \\alpha \\cos x + r\\sin \\alpha \\sin x\\\\\n= r \\cos (x - \\alpha)\n$</p>\n\n<p>since we know that $\\cos \\theta \\cos \\phi + \\sin \\theta \\sin \\phi = \\cos (\\theta - \\phi)$. Now, we already have $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. 
Squaring and adding both of these,</p>\n\n<p>$\na^2 + b^2 = r^2(\\cos^2 \\alpha + \\sin^2 \\alpha) = r^2 \\\\\n\\implies r = \\sqrt{a^2 + b^2}\n$</p>\n\n<p>which gives us our desired result.</p>\n" }, { "answer_id": 4021290, "score": 1, "text": "<p>We can prove this by using properties of complex numbers:\n<span class=\"math-container\">$$ \\begin{align}a\\cos(x) + b\\sin(x) &amp;= a \\cos(x) + b \\cos\\left(x-\\frac{\\pi}{2}\\right)\\\\\n &amp;=a \\operatorname{Re}\\left\\{e^{ix}\\right\\} + b \\operatorname{Re}\\left\\{e^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{ae^{ix} + be^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}(a-bi) \\right\\} \\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}\\sqrt{a^2+b^2}e^{i\\cdot\\operatorname{atan2}(-b,a)} \\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{\\sqrt{a^2+b^2}e^{i(x+\\operatorname{atan2}(-b,a))}\\right\\}\\\\\n &amp;= \\sqrt{a^2+b^2} \\cos(x+\\operatorname{atan2}(-b,a))\n\\end{align}\n$$</span></p>\n<p>where <a href=\"https://en.wikipedia.org/wiki/Atan2\" rel=\"nofollow noreferrer\"><span class=\"math-container\">$\\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class=\"math-container\">$x_0+iy_0$</span>.</p>\n<p>This formula works for all real numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> except when <span class=\"math-container\">$a=b=0$</span>.</p>\n<h2>Generalization</h2>\n<p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p>\n<p><span class=\"math-container\">$$ \\begin{align}a\\cos(x+\\alpha) + b \\cos(x+\\beta) &amp;= \\operatorname{Re}\\left\\{ae^{i(x+\\alpha)} + be^{i(x+\\beta)}\\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}\\left(ae^{i\\alpha}+be^{i\\beta}\\right) \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}ce^{i\\gamma} \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{ce^{i(x+\\gamma)} \\right\\}\\\\\n&amp;= c \\cos(x+\\gamma)\n\\end{align}\n$$</span></p>\n<p>where <span class=\"math-container\">$ce^{i\\gamma} = ae^{i\\alpha}+be^{i\\beta}$</span>.</p>\n<p>We have\n<span class=\"math-container\">$$\\begin{align}ce^{i\\gamma}&amp;=a (\\cos(\\alpha)+i\\sin(\\alpha)) +b (\\cos(\\beta)+i\\sin(\\beta))\\\\\n&amp;= (a\\cos(\\alpha)+b\\cos(\\beta)) + (a\\sin(\\alpha)+b\\sin(\\beta))i.\\\\\n\\end{align}\n$$</span></p>\n<p>So\n<span class=\"math-container\">$$c=\\sqrt{(a\\cos(\\alpha)+b\\cos(\\beta))^2+(a\\sin(\\alpha)+b\\sin(\\beta))^2}$$</span>\nand\n<span class=\"math-container\">$$\\gamma=\\operatorname{atan2}\\big(a\\sin(\\alpha)+b\\sin(\\beta),a\\cos(\\alpha)+b\\cos(\\beta)\\big).$$</span></p>\n<p>Sidenotes:</p>\n<ul>\n<li>This result is equivalent to\n<a href=\"https://en.wikipedia.org/wiki/Phasor\" rel=\"nofollow noreferrer\">phasor</a> addition.</li>\n<li>We can get similar formula involving <span class=\"math-container\">$\\sin$</span> by taking the imaginary\npart.</li>\n</ul>\n" }, { "answer_id": 2085691, "score": 2, "text": "<p>You are largely correct. 
However you could prove the first part in a very simple way:</p>\n<p>We can write <span class=\"math-container\">$$P= a\\cos x+b\\sin x =\\sqrt {a^2+b^2} \\left[\\frac {a}{\\sqrt {a^2+b^2}}\\cos x+\\frac{b}{\\sqrt {a^2+b^2}}\\sin x \\right] $$</span> Now we can take <span class=\"math-container\">$\\frac{a}{\\sqrt {a^2+b^2}} $</span> as <span class=\"math-container\">$\\cos \\alpha $</span> giving us <span class=\"math-container\">$$P=\\sqrt {a^2+b^2}[\\cos \\alpha \\cos x+\\sin \\alpha \\sin x]=\\sqrt{a^2+b^2}\\cos (x-\\alpha) $$</span> And also <span class=\"math-container\">$$\\alpha =\\arccos \\frac {a}{\\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p>\n<p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p>\n" }, { "answer_id": 2085694, "score": 1, "text": "<p>The proof is only valid for $0&lt;x&lt;\\frac\\pi2$. For a more general proof, let:\n$$a\\cos x+b\\sin x\\ \\equiv\\ R\\cos(x-\\alpha)=R\\cos\\alpha\\cos x + R\\sin\\alpha\\sin x$$\nSo $a=R\\cos\\alpha$ and $b=R\\sin\\alpha$, from which you can readily find $R$ and $\\alpha$.</p>\n" }, { "answer_id": 2085732, "score": 1, "text": "<p>Let $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Then,</p>\n\n<p>$\na\\cos x + b\\sin x\\\\\n= r\\cos \\alpha \\cos x + r\\sin \\alpha \\sin x\\\\\n= r \\cos (x - \\alpha)\n$</p>\n\n<p>since we know that $\\cos \\theta \\cos \\phi + \\sin \\theta \\sin \\phi = \\cos (\\theta - \\phi)$. Now, we already have $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Squaring and adding both of these,</p>\n\n<p>$\na^2 + b^2 = r^2(\\cos^2 \\alpha + \\sin^2 \\alpha) = r^2 \\\\\n\\implies r = \\sqrt{a^2 + b^2}\n$</p>\n\n<p>which gives us our desired result.</p>\n" }, { "answer_id": 4021290, "score": 1, "text": "<p>We can prove this by using properties of complex numbers:\n<span class=\"math-container\">$$ \\begin{align}a\\cos(x) + b\\sin(x) &amp;= a \\cos(x) + b \\cos\\left(x-\\frac{\\pi}{2}\\right)\\\\\n &amp;=a \\operatorname{Re}\\left\\{e^{ix}\\right\\} + b \\operatorname{Re}\\left\\{e^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{ae^{ix} + be^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}(a-bi) \\right\\} \\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}\\sqrt{a^2+b^2}e^{i\\cdot\\operatorname{atan2}(-b,a)} \\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{\\sqrt{a^2+b^2}e^{i(x+\\operatorname{atan2}(-b,a))}\\right\\}\\\\\n &amp;= \\sqrt{a^2+b^2} \\cos(x+\\operatorname{atan2}(-b,a))\n\\end{align}\n$$</span></p>\n<p>where <a href=\"https://en.wikipedia.org/wiki/Atan2\" rel=\"nofollow noreferrer\"><span class=\"math-container\">$\\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class=\"math-container\">$x_0+iy_0$</span>.</p>\n<p>This formula works for all real numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> except when <span class=\"math-container\">$a=b=0$</span>.</p>\n<h2>Generalization</h2>\n<p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p>\n<p><span class=\"math-container\">$$ \\begin{align}a\\cos(x+\\alpha) + b \\cos(x+\\beta) &amp;= \\operatorname{Re}\\left\\{ae^{i(x+\\alpha)} + be^{i(x+\\beta)}\\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}\\left(ae^{i\\alpha}+be^{i\\beta}\\right) \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}ce^{i\\gamma} \\right\\}\\\\\n&amp;= 
\\operatorname{Re}\\left\\{ce^{i(x+\\gamma)} \\right\\}\\\\\n&amp;= c \\cos(x+\\gamma)\n\\end{align}\n$$</span></p>\n<p>where <span class=\"math-container\">$ce^{i\\gamma} = ae^{i\\alpha}+be^{i\\beta}$</span>.</p>\n<p>We have\n<span class=\"math-container\">$$\\begin{align}ce^{i\\gamma}&amp;=a (\\cos(\\alpha)+i\\sin(\\alpha)) +b (\\cos(\\beta)+i\\sin(\\beta))\\\\\n&amp;= (a\\cos(\\alpha)+b\\cos(\\beta)) + (a\\sin(\\alpha)+b\\sin(\\beta))i.\\\\\n\\end{align}\n$$</span></p>\n<p>So\n<span class=\"math-container\">$$c=\\sqrt{(a\\cos(\\alpha)+b\\cos(\\beta))^2+(a\\sin(\\alpha)+b\\sin(\\beta))^2}$$</span>\nand\n<span class=\"math-container\">$$\\gamma=\\operatorname{atan2}\\big(a\\sin(\\alpha)+b\\sin(\\beta),a\\cos(\\alpha)+b\\cos(\\beta)\\big).$$</span></p>\n<p>Sidenotes:</p>\n<ul>\n<li>This result is equivalent to\n<a href=\"https://en.wikipedia.org/wiki/Phasor\" rel=\"nofollow noreferrer\">phasor</a> addition.</li>\n<li>We can get similar formula involving <span class=\"math-container\">$\\sin$</span> by taking the imaginary\npart.</li>\n</ul>\n" }, { "answer_id": 2085691, "score": 2, "text": "<p>You are largely correct. However you could prove the first part in a very simple way:</p>\n<p>We can write <span class=\"math-container\">$$P= a\\cos x+b\\sin x =\\sqrt {a^2+b^2} \\left[\\frac {a}{\\sqrt {a^2+b^2}}\\cos x+\\frac{b}{\\sqrt {a^2+b^2}}\\sin x \\right] $$</span> Now we can take <span class=\"math-container\">$\\frac{a}{\\sqrt {a^2+b^2}} $</span> as <span class=\"math-container\">$\\cos \\alpha $</span> giving us <span class=\"math-container\">$$P=\\sqrt {a^2+b^2}[\\cos \\alpha \\cos x+\\sin \\alpha \\sin x]=\\sqrt{a^2+b^2}\\cos (x-\\alpha) $$</span> And also <span class=\"math-container\">$$\\alpha =\\arccos \\frac {a}{\\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p>\n<p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p>\n" }, { "answer_id": 2085694, "score": 1, "text": "<p>The proof is only valid for $0&lt;x&lt;\\frac\\pi2$. For a more general proof, let:\n$$a\\cos x+b\\sin x\\ \\equiv\\ R\\cos(x-\\alpha)=R\\cos\\alpha\\cos x + R\\sin\\alpha\\sin x$$\nSo $a=R\\cos\\alpha$ and $b=R\\sin\\alpha$, from which you can readily find $R$ and $\\alpha$.</p>\n" }, { "answer_id": 2085732, "score": 1, "text": "<p>Let $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Then,</p>\n\n<p>$\na\\cos x + b\\sin x\\\\\n= r\\cos \\alpha \\cos x + r\\sin \\alpha \\sin x\\\\\n= r \\cos (x - \\alpha)\n$</p>\n\n<p>since we know that $\\cos \\theta \\cos \\phi + \\sin \\theta \\sin \\phi = \\cos (\\theta - \\phi)$. Now, we already have $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. 
Squaring and adding both of these,</p>\n\n<p>$\na^2 + b^2 = r^2(\\cos^2 \\alpha + \\sin^2 \\alpha) = r^2 \\\\\n\\implies r = \\sqrt{a^2 + b^2}\n$</p>\n\n<p>which gives us our desired result.</p>\n" }, { "answer_id": 4021290, "score": 1, "text": "<p>We can prove this by using properties of complex numbers:\n<span class=\"math-container\">$$ \\begin{align}a\\cos(x) + b\\sin(x) &amp;= a \\cos(x) + b \\cos\\left(x-\\frac{\\pi}{2}\\right)\\\\\n &amp;=a \\operatorname{Re}\\left\\{e^{ix}\\right\\} + b \\operatorname{Re}\\left\\{e^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{ae^{ix} + be^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}(a-bi) \\right\\} \\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}\\sqrt{a^2+b^2}e^{i\\cdot\\operatorname{atan2}(-b,a)} \\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{\\sqrt{a^2+b^2}e^{i(x+\\operatorname{atan2}(-b,a))}\\right\\}\\\\\n &amp;= \\sqrt{a^2+b^2} \\cos(x+\\operatorname{atan2}(-b,a))\n\\end{align}\n$$</span></p>\n<p>where <a href=\"https://en.wikipedia.org/wiki/Atan2\" rel=\"nofollow noreferrer\"><span class=\"math-container\">$\\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class=\"math-container\">$x_0+iy_0$</span>.</p>\n<p>This formula works for all real numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> except when <span class=\"math-container\">$a=b=0$</span>.</p>\n<h2>Generalization</h2>\n<p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p>\n<p><span class=\"math-container\">$$ \\begin{align}a\\cos(x+\\alpha) + b \\cos(x+\\beta) &amp;= \\operatorname{Re}\\left\\{ae^{i(x+\\alpha)} + be^{i(x+\\beta)}\\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}\\left(ae^{i\\alpha}+be^{i\\beta}\\right) \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}ce^{i\\gamma} \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{ce^{i(x+\\gamma)} \\right\\}\\\\\n&amp;= c \\cos(x+\\gamma)\n\\end{align}\n$$</span></p>\n<p>where <span class=\"math-container\">$ce^{i\\gamma} = ae^{i\\alpha}+be^{i\\beta}$</span>.</p>\n<p>We have\n<span class=\"math-container\">$$\\begin{align}ce^{i\\gamma}&amp;=a (\\cos(\\alpha)+i\\sin(\\alpha)) +b (\\cos(\\beta)+i\\sin(\\beta))\\\\\n&amp;= (a\\cos(\\alpha)+b\\cos(\\beta)) + (a\\sin(\\alpha)+b\\sin(\\beta))i.\\\\\n\\end{align}\n$$</span></p>\n<p>So\n<span class=\"math-container\">$$c=\\sqrt{(a\\cos(\\alpha)+b\\cos(\\beta))^2+(a\\sin(\\alpha)+b\\sin(\\beta))^2}$$</span>\nand\n<span class=\"math-container\">$$\\gamma=\\operatorname{atan2}\\big(a\\sin(\\alpha)+b\\sin(\\beta),a\\cos(\\alpha)+b\\cos(\\beta)\\big).$$</span></p>\n<p>Sidenotes:</p>\n<ul>\n<li>This result is equivalent to\n<a href=\"https://en.wikipedia.org/wiki/Phasor\" rel=\"nofollow noreferrer\">phasor</a> addition.</li>\n<li>We can get similar formula involving <span class=\"math-container\">$\\sin$</span> by taking the imaginary\npart.</li>\n</ul>\n" }, { "answer_id": 2085691, "score": 2, "text": "<p>You are largely correct. 
However you could prove the first part in a very simple way:</p>\n<p>We can write <span class=\"math-container\">$$P= a\\cos x+b\\sin x =\\sqrt {a^2+b^2} \\left[\\frac {a}{\\sqrt {a^2+b^2}}\\cos x+\\frac{b}{\\sqrt {a^2+b^2}}\\sin x \\right] $$</span> Now we can take <span class=\"math-container\">$\\frac{a}{\\sqrt {a^2+b^2}} $</span> as <span class=\"math-container\">$\\cos \\alpha $</span> giving us <span class=\"math-container\">$$P=\\sqrt {a^2+b^2}[\\cos \\alpha \\cos x+\\sin \\alpha \\sin x]=\\sqrt{a^2+b^2}\\cos (x-\\alpha) $$</span> And also <span class=\"math-container\">$$\\alpha =\\arccos \\frac {a}{\\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p>\n<p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p>\n" }, { "answer_id": 2085694, "score": 1, "text": "<p>The proof is only valid for $0&lt;x&lt;\\frac\\pi2$. For a more general proof, let:\n$$a\\cos x+b\\sin x\\ \\equiv\\ R\\cos(x-\\alpha)=R\\cos\\alpha\\cos x + R\\sin\\alpha\\sin x$$\nSo $a=R\\cos\\alpha$ and $b=R\\sin\\alpha$, from which you can readily find $R$ and $\\alpha$.</p>\n" }, { "answer_id": 2085732, "score": 1, "text": "<p>Let $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Then,</p>\n\n<p>$\na\\cos x + b\\sin x\\\\\n= r\\cos \\alpha \\cos x + r\\sin \\alpha \\sin x\\\\\n= r \\cos (x - \\alpha)\n$</p>\n\n<p>since we know that $\\cos \\theta \\cos \\phi + \\sin \\theta \\sin \\phi = \\cos (\\theta - \\phi)$. Now, we already have $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Squaring and adding both of these,</p>\n\n<p>$\na^2 + b^2 = r^2(\\cos^2 \\alpha + \\sin^2 \\alpha) = r^2 \\\\\n\\implies r = \\sqrt{a^2 + b^2}\n$</p>\n\n<p>which gives us our desired result.</p>\n" }, { "answer_id": 4021290, "score": 1, "text": "<p>We can prove this by using properties of complex numbers:\n<span class=\"math-container\">$$ \\begin{align}a\\cos(x) + b\\sin(x) &amp;= a \\cos(x) + b \\cos\\left(x-\\frac{\\pi}{2}\\right)\\\\\n &amp;=a \\operatorname{Re}\\left\\{e^{ix}\\right\\} + b \\operatorname{Re}\\left\\{e^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{ae^{ix} + be^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}(a-bi) \\right\\} \\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}\\sqrt{a^2+b^2}e^{i\\cdot\\operatorname{atan2}(-b,a)} \\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{\\sqrt{a^2+b^2}e^{i(x+\\operatorname{atan2}(-b,a))}\\right\\}\\\\\n &amp;= \\sqrt{a^2+b^2} \\cos(x+\\operatorname{atan2}(-b,a))\n\\end{align}\n$$</span></p>\n<p>where <a href=\"https://en.wikipedia.org/wiki/Atan2\" rel=\"nofollow noreferrer\"><span class=\"math-container\">$\\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class=\"math-container\">$x_0+iy_0$</span>.</p>\n<p>This formula works for all real numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> except when <span class=\"math-container\">$a=b=0$</span>.</p>\n<h2>Generalization</h2>\n<p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p>\n<p><span class=\"math-container\">$$ \\begin{align}a\\cos(x+\\alpha) + b \\cos(x+\\beta) &amp;= \\operatorname{Re}\\left\\{ae^{i(x+\\alpha)} + be^{i(x+\\beta)}\\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}\\left(ae^{i\\alpha}+be^{i\\beta}\\right) \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}ce^{i\\gamma} \\right\\}\\\\\n&amp;= 
\\operatorname{Re}\\left\\{ce^{i(x+\\gamma)} \\right\\}\\\\\n&amp;= c \\cos(x+\\gamma)\n\\end{align}\n$$</span></p>\n<p>where <span class=\"math-container\">$ce^{i\\gamma} = ae^{i\\alpha}+be^{i\\beta}$</span>.</p>\n<p>We have\n<span class=\"math-container\">$$\\begin{align}ce^{i\\gamma}&amp;=a (\\cos(\\alpha)+i\\sin(\\alpha)) +b (\\cos(\\beta)+i\\sin(\\beta))\\\\\n&amp;= (a\\cos(\\alpha)+b\\cos(\\beta)) + (a\\sin(\\alpha)+b\\sin(\\beta))i.\\\\\n\\end{align}\n$$</span></p>\n<p>So\n<span class=\"math-container\">$$c=\\sqrt{(a\\cos(\\alpha)+b\\cos(\\beta))^2+(a\\sin(\\alpha)+b\\sin(\\beta))^2}$$</span>\nand\n<span class=\"math-container\">$$\\gamma=\\operatorname{atan2}\\big(a\\sin(\\alpha)+b\\sin(\\beta),a\\cos(\\alpha)+b\\cos(\\beta)\\big).$$</span></p>\n<p>Sidenotes:</p>\n<ul>\n<li>This result is equivalent to\n<a href=\"https://en.wikipedia.org/wiki/Phasor\" rel=\"nofollow noreferrer\">phasor</a> addition.</li>\n<li>We can get similar formula involving <span class=\"math-container\">$\\sin$</span> by taking the imaginary\npart.</li>\n</ul>\n" }, { "answer_id": 2085691, "score": 2, "text": "<p>You are largely correct. However you could prove the first part in a very simple way:</p>\n<p>We can write <span class=\"math-container\">$$P= a\\cos x+b\\sin x =\\sqrt {a^2+b^2} \\left[\\frac {a}{\\sqrt {a^2+b^2}}\\cos x+\\frac{b}{\\sqrt {a^2+b^2}}\\sin x \\right] $$</span> Now we can take <span class=\"math-container\">$\\frac{a}{\\sqrt {a^2+b^2}} $</span> as <span class=\"math-container\">$\\cos \\alpha $</span> giving us <span class=\"math-container\">$$P=\\sqrt {a^2+b^2}[\\cos \\alpha \\cos x+\\sin \\alpha \\sin x]=\\sqrt{a^2+b^2}\\cos (x-\\alpha) $$</span> And also <span class=\"math-container\">$$\\alpha =\\arccos \\frac {a}{\\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p>\n<p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p>\n" }, { "answer_id": 2085694, "score": 1, "text": "<p>The proof is only valid for $0&lt;x&lt;\\frac\\pi2$. For a more general proof, let:\n$$a\\cos x+b\\sin x\\ \\equiv\\ R\\cos(x-\\alpha)=R\\cos\\alpha\\cos x + R\\sin\\alpha\\sin x$$\nSo $a=R\\cos\\alpha$ and $b=R\\sin\\alpha$, from which you can readily find $R$ and $\\alpha$.</p>\n" }, { "answer_id": 2085732, "score": 1, "text": "<p>Let $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Then,</p>\n\n<p>$\na\\cos x + b\\sin x\\\\\n= r\\cos \\alpha \\cos x + r\\sin \\alpha \\sin x\\\\\n= r \\cos (x - \\alpha)\n$</p>\n\n<p>since we know that $\\cos \\theta \\cos \\phi + \\sin \\theta \\sin \\phi = \\cos (\\theta - \\phi)$. Now, we already have $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. 
Squaring and adding both of these,</p>\n\n<p>$\na^2 + b^2 = r^2(\\cos^2 \\alpha + \\sin^2 \\alpha) = r^2 \\\\\n\\implies r = \\sqrt{a^2 + b^2}\n$</p>\n\n<p>which gives us our desired result.</p>\n" }, { "answer_id": 4021290, "score": 1, "text": "<p>We can prove this by using properties of complex numbers:\n<span class=\"math-container\">$$ \\begin{align}a\\cos(x) + b\\sin(x) &amp;= a \\cos(x) + b \\cos\\left(x-\\frac{\\pi}{2}\\right)\\\\\n &amp;=a \\operatorname{Re}\\left\\{e^{ix}\\right\\} + b \\operatorname{Re}\\left\\{e^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{ae^{ix} + be^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}(a-bi) \\right\\} \\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}\\sqrt{a^2+b^2}e^{i\\cdot\\operatorname{atan2}(-b,a)} \\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{\\sqrt{a^2+b^2}e^{i(x+\\operatorname{atan2}(-b,a))}\\right\\}\\\\\n &amp;= \\sqrt{a^2+b^2} \\cos(x+\\operatorname{atan2}(-b,a))\n\\end{align}\n$$</span></p>\n<p>where <a href=\"https://en.wikipedia.org/wiki/Atan2\" rel=\"nofollow noreferrer\"><span class=\"math-container\">$\\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class=\"math-container\">$x_0+iy_0$</span>.</p>\n<p>This formula works for all real numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> except when <span class=\"math-container\">$a=b=0$</span>.</p>\n<h2>Generalization</h2>\n<p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p>\n<p><span class=\"math-container\">$$ \\begin{align}a\\cos(x+\\alpha) + b \\cos(x+\\beta) &amp;= \\operatorname{Re}\\left\\{ae^{i(x+\\alpha)} + be^{i(x+\\beta)}\\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}\\left(ae^{i\\alpha}+be^{i\\beta}\\right) \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}ce^{i\\gamma} \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{ce^{i(x+\\gamma)} \\right\\}\\\\\n&amp;= c \\cos(x+\\gamma)\n\\end{align}\n$$</span></p>\n<p>where <span class=\"math-container\">$ce^{i\\gamma} = ae^{i\\alpha}+be^{i\\beta}$</span>.</p>\n<p>We have\n<span class=\"math-container\">$$\\begin{align}ce^{i\\gamma}&amp;=a (\\cos(\\alpha)+i\\sin(\\alpha)) +b (\\cos(\\beta)+i\\sin(\\beta))\\\\\n&amp;= (a\\cos(\\alpha)+b\\cos(\\beta)) + (a\\sin(\\alpha)+b\\sin(\\beta))i.\\\\\n\\end{align}\n$$</span></p>\n<p>So\n<span class=\"math-container\">$$c=\\sqrt{(a\\cos(\\alpha)+b\\cos(\\beta))^2+(a\\sin(\\alpha)+b\\sin(\\beta))^2}$$</span>\nand\n<span class=\"math-container\">$$\\gamma=\\operatorname{atan2}\\big(a\\sin(\\alpha)+b\\sin(\\beta),a\\cos(\\alpha)+b\\cos(\\beta)\\big).$$</span></p>\n<p>Sidenotes:</p>\n<ul>\n<li>This result is equivalent to\n<a href=\"https://en.wikipedia.org/wiki/Phasor\" rel=\"nofollow noreferrer\">phasor</a> addition.</li>\n<li>We can get similar formula involving <span class=\"math-container\">$\\sin$</span> by taking the imaginary\npart.</li>\n</ul>\n" }, { "answer_id": 2085691, "score": 2, "text": "<p>You are largely correct. 
However you could prove the first part in a very simple way:</p>\n<p>We can write <span class=\"math-container\">$$P= a\\cos x+b\\sin x =\\sqrt {a^2+b^2} \\left[\\frac {a}{\\sqrt {a^2+b^2}}\\cos x+\\frac{b}{\\sqrt {a^2+b^2}}\\sin x \\right] $$</span> Now we can take <span class=\"math-container\">$\\frac{a}{\\sqrt {a^2+b^2}} $</span> as <span class=\"math-container\">$\\cos \\alpha $</span> giving us <span class=\"math-container\">$$P=\\sqrt {a^2+b^2}[\\cos \\alpha \\cos x+\\sin \\alpha \\sin x]=\\sqrt{a^2+b^2}\\cos (x-\\alpha) $$</span> And also <span class=\"math-container\">$$\\alpha =\\arccos \\frac {a}{\\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p>\n<p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p>\n" }, { "answer_id": 2085694, "score": 1, "text": "<p>The proof is only valid for $0&lt;x&lt;\\frac\\pi2$. For a more general proof, let:\n$$a\\cos x+b\\sin x\\ \\equiv\\ R\\cos(x-\\alpha)=R\\cos\\alpha\\cos x + R\\sin\\alpha\\sin x$$\nSo $a=R\\cos\\alpha$ and $b=R\\sin\\alpha$, from which you can readily find $R$ and $\\alpha$.</p>\n" }, { "answer_id": 2085732, "score": 1, "text": "<p>Let $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Then,</p>\n\n<p>$\na\\cos x + b\\sin x\\\\\n= r\\cos \\alpha \\cos x + r\\sin \\alpha \\sin x\\\\\n= r \\cos (x - \\alpha)\n$</p>\n\n<p>since we know that $\\cos \\theta \\cos \\phi + \\sin \\theta \\sin \\phi = \\cos (\\theta - \\phi)$. Now, we already have $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Squaring and adding both of these,</p>\n\n<p>$\na^2 + b^2 = r^2(\\cos^2 \\alpha + \\sin^2 \\alpha) = r^2 \\\\\n\\implies r = \\sqrt{a^2 + b^2}\n$</p>\n\n<p>which gives us our desired result.</p>\n" }, { "answer_id": 4021290, "score": 1, "text": "<p>We can prove this by using properties of complex numbers:\n<span class=\"math-container\">$$ \\begin{align}a\\cos(x) + b\\sin(x) &amp;= a \\cos(x) + b \\cos\\left(x-\\frac{\\pi}{2}\\right)\\\\\n &amp;=a \\operatorname{Re}\\left\\{e^{ix}\\right\\} + b \\operatorname{Re}\\left\\{e^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{ae^{ix} + be^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}(a-bi) \\right\\} \\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}\\sqrt{a^2+b^2}e^{i\\cdot\\operatorname{atan2}(-b,a)} \\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{\\sqrt{a^2+b^2}e^{i(x+\\operatorname{atan2}(-b,a))}\\right\\}\\\\\n &amp;= \\sqrt{a^2+b^2} \\cos(x+\\operatorname{atan2}(-b,a))\n\\end{align}\n$$</span></p>\n<p>where <a href=\"https://en.wikipedia.org/wiki/Atan2\" rel=\"nofollow noreferrer\"><span class=\"math-container\">$\\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class=\"math-container\">$x_0+iy_0$</span>.</p>\n<p>This formula works for all real numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> except when <span class=\"math-container\">$a=b=0$</span>.</p>\n<h2>Generalization</h2>\n<p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p>\n<p><span class=\"math-container\">$$ \\begin{align}a\\cos(x+\\alpha) + b \\cos(x+\\beta) &amp;= \\operatorname{Re}\\left\\{ae^{i(x+\\alpha)} + be^{i(x+\\beta)}\\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}\\left(ae^{i\\alpha}+be^{i\\beta}\\right) \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}ce^{i\\gamma} \\right\\}\\\\\n&amp;= 
\\operatorname{Re}\\left\\{ce^{i(x+\\gamma)} \\right\\}\\\\\n&amp;= c \\cos(x+\\gamma)\n\\end{align}\n$$</span></p>\n<p>where <span class=\"math-container\">$ce^{i\\gamma} = ae^{i\\alpha}+be^{i\\beta}$</span>.</p>\n<p>We have\n<span class=\"math-container\">$$\\begin{align}ce^{i\\gamma}&amp;=a (\\cos(\\alpha)+i\\sin(\\alpha)) +b (\\cos(\\beta)+i\\sin(\\beta))\\\\\n&amp;= (a\\cos(\\alpha)+b\\cos(\\beta)) + (a\\sin(\\alpha)+b\\sin(\\beta))i.\\\\\n\\end{align}\n$$</span></p>\n<p>So\n<span class=\"math-container\">$$c=\\sqrt{(a\\cos(\\alpha)+b\\cos(\\beta))^2+(a\\sin(\\alpha)+b\\sin(\\beta))^2}$$</span>\nand\n<span class=\"math-container\">$$\\gamma=\\operatorname{atan2}\\big(a\\sin(\\alpha)+b\\sin(\\beta),a\\cos(\\alpha)+b\\cos(\\beta)\\big).$$</span></p>\n<p>Sidenotes:</p>\n<ul>\n<li>This result is equivalent to\n<a href=\"https://en.wikipedia.org/wiki/Phasor\" rel=\"nofollow noreferrer\">phasor</a> addition.</li>\n<li>We can get similar formula involving <span class=\"math-container\">$\\sin$</span> by taking the imaginary\npart.</li>\n</ul>\n" }, { "answer_id": 2085691, "score": 2, "text": "<p>You are largely correct. However you could prove the first part in a very simple way:</p>\n<p>We can write <span class=\"math-container\">$$P= a\\cos x+b\\sin x =\\sqrt {a^2+b^2} \\left[\\frac {a}{\\sqrt {a^2+b^2}}\\cos x+\\frac{b}{\\sqrt {a^2+b^2}}\\sin x \\right] $$</span> Now we can take <span class=\"math-container\">$\\frac{a}{\\sqrt {a^2+b^2}} $</span> as <span class=\"math-container\">$\\cos \\alpha $</span> giving us <span class=\"math-container\">$$P=\\sqrt {a^2+b^2}[\\cos \\alpha \\cos x+\\sin \\alpha \\sin x]=\\sqrt{a^2+b^2}\\cos (x-\\alpha) $$</span> And also <span class=\"math-container\">$$\\alpha =\\arccos \\frac {a}{\\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p>\n<p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p>\n" }, { "answer_id": 2085694, "score": 1, "text": "<p>The proof is only valid for $0&lt;x&lt;\\frac\\pi2$. For a more general proof, let:\n$$a\\cos x+b\\sin x\\ \\equiv\\ R\\cos(x-\\alpha)=R\\cos\\alpha\\cos x + R\\sin\\alpha\\sin x$$\nSo $a=R\\cos\\alpha$ and $b=R\\sin\\alpha$, from which you can readily find $R$ and $\\alpha$.</p>\n" }, { "answer_id": 2085732, "score": 1, "text": "<p>Let $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Then,</p>\n\n<p>$\na\\cos x + b\\sin x\\\\\n= r\\cos \\alpha \\cos x + r\\sin \\alpha \\sin x\\\\\n= r \\cos (x - \\alpha)\n$</p>\n\n<p>since we know that $\\cos \\theta \\cos \\phi + \\sin \\theta \\sin \\phi = \\cos (\\theta - \\phi)$. Now, we already have $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. 
Squaring and adding both of these,</p>\n\n<p>$\na^2 + b^2 = r^2(\\cos^2 \\alpha + \\sin^2 \\alpha) = r^2 \\\\\n\\implies r = \\sqrt{a^2 + b^2}\n$</p>\n\n<p>which gives us our desired result.</p>\n" }, { "answer_id": 4021290, "score": 1, "text": "<p>We can prove this by using properties of complex numbers:\n<span class=\"math-container\">$$ \\begin{align}a\\cos(x) + b\\sin(x) &amp;= a \\cos(x) + b \\cos\\left(x-\\frac{\\pi}{2}\\right)\\\\\n &amp;=a \\operatorname{Re}\\left\\{e^{ix}\\right\\} + b \\operatorname{Re}\\left\\{e^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{ae^{ix} + be^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}(a-bi) \\right\\} \\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}\\sqrt{a^2+b^2}e^{i\\cdot\\operatorname{atan2}(-b,a)} \\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{\\sqrt{a^2+b^2}e^{i(x+\\operatorname{atan2}(-b,a))}\\right\\}\\\\\n &amp;= \\sqrt{a^2+b^2} \\cos(x+\\operatorname{atan2}(-b,a))\n\\end{align}\n$$</span></p>\n<p>where <a href=\"https://en.wikipedia.org/wiki/Atan2\" rel=\"nofollow noreferrer\"><span class=\"math-container\">$\\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class=\"math-container\">$x_0+iy_0$</span>.</p>\n<p>This formula works for all real numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> except when <span class=\"math-container\">$a=b=0$</span>.</p>\n<h2>Generalization</h2>\n<p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p>\n<p><span class=\"math-container\">$$ \\begin{align}a\\cos(x+\\alpha) + b \\cos(x+\\beta) &amp;= \\operatorname{Re}\\left\\{ae^{i(x+\\alpha)} + be^{i(x+\\beta)}\\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}\\left(ae^{i\\alpha}+be^{i\\beta}\\right) \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}ce^{i\\gamma} \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{ce^{i(x+\\gamma)} \\right\\}\\\\\n&amp;= c \\cos(x+\\gamma)\n\\end{align}\n$$</span></p>\n<p>where <span class=\"math-container\">$ce^{i\\gamma} = ae^{i\\alpha}+be^{i\\beta}$</span>.</p>\n<p>We have\n<span class=\"math-container\">$$\\begin{align}ce^{i\\gamma}&amp;=a (\\cos(\\alpha)+i\\sin(\\alpha)) +b (\\cos(\\beta)+i\\sin(\\beta))\\\\\n&amp;= (a\\cos(\\alpha)+b\\cos(\\beta)) + (a\\sin(\\alpha)+b\\sin(\\beta))i.\\\\\n\\end{align}\n$$</span></p>\n<p>So\n<span class=\"math-container\">$$c=\\sqrt{(a\\cos(\\alpha)+b\\cos(\\beta))^2+(a\\sin(\\alpha)+b\\sin(\\beta))^2}$$</span>\nand\n<span class=\"math-container\">$$\\gamma=\\operatorname{atan2}\\big(a\\sin(\\alpha)+b\\sin(\\beta),a\\cos(\\alpha)+b\\cos(\\beta)\\big).$$</span></p>\n<p>Sidenotes:</p>\n<ul>\n<li>This result is equivalent to\n<a href=\"https://en.wikipedia.org/wiki/Phasor\" rel=\"nofollow noreferrer\">phasor</a> addition.</li>\n<li>We can get similar formula involving <span class=\"math-container\">$\\sin$</span> by taking the imaginary\npart.</li>\n</ul>\n" }, { "answer_id": 2085691, "score": 2, "text": "<p>You are largely correct. 
However you could prove the first part in a very simple way:</p>\n<p>We can write <span class=\"math-container\">$$P= a\\cos x+b\\sin x =\\sqrt {a^2+b^2} \\left[\\frac {a}{\\sqrt {a^2+b^2}}\\cos x+\\frac{b}{\\sqrt {a^2+b^2}}\\sin x \\right] $$</span> Now we can take <span class=\"math-container\">$\\frac{a}{\\sqrt {a^2+b^2}} $</span> as <span class=\"math-container\">$\\cos \\alpha $</span> giving us <span class=\"math-container\">$$P=\\sqrt {a^2+b^2}[\\cos \\alpha \\cos x+\\sin \\alpha \\sin x]=\\sqrt{a^2+b^2}\\cos (x-\\alpha) $$</span> And also <span class=\"math-container\">$$\\alpha =\\arccos \\frac {a}{\\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p>\n<p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p>\n" }, { "answer_id": 2085694, "score": 1, "text": "<p>The proof is only valid for $0&lt;x&lt;\\frac\\pi2$. For a more general proof, let:\n$$a\\cos x+b\\sin x\\ \\equiv\\ R\\cos(x-\\alpha)=R\\cos\\alpha\\cos x + R\\sin\\alpha\\sin x$$\nSo $a=R\\cos\\alpha$ and $b=R\\sin\\alpha$, from which you can readily find $R$ and $\\alpha$.</p>\n" }, { "answer_id": 2085732, "score": 1, "text": "<p>Let $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Then,</p>\n\n<p>$\na\\cos x + b\\sin x\\\\\n= r\\cos \\alpha \\cos x + r\\sin \\alpha \\sin x\\\\\n= r \\cos (x - \\alpha)\n$</p>\n\n<p>since we know that $\\cos \\theta \\cos \\phi + \\sin \\theta \\sin \\phi = \\cos (\\theta - \\phi)$. Now, we already have $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Squaring and adding both of these,</p>\n\n<p>$\na^2 + b^2 = r^2(\\cos^2 \\alpha + \\sin^2 \\alpha) = r^2 \\\\\n\\implies r = \\sqrt{a^2 + b^2}\n$</p>\n\n<p>which gives us our desired result.</p>\n" }, { "answer_id": 4021290, "score": 1, "text": "<p>We can prove this by using properties of complex numbers:\n<span class=\"math-container\">$$ \\begin{align}a\\cos(x) + b\\sin(x) &amp;= a \\cos(x) + b \\cos\\left(x-\\frac{\\pi}{2}\\right)\\\\\n &amp;=a \\operatorname{Re}\\left\\{e^{ix}\\right\\} + b \\operatorname{Re}\\left\\{e^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{ae^{ix} + be^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}(a-bi) \\right\\} \\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}\\sqrt{a^2+b^2}e^{i\\cdot\\operatorname{atan2}(-b,a)} \\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{\\sqrt{a^2+b^2}e^{i(x+\\operatorname{atan2}(-b,a))}\\right\\}\\\\\n &amp;= \\sqrt{a^2+b^2} \\cos(x+\\operatorname{atan2}(-b,a))\n\\end{align}\n$$</span></p>\n<p>where <a href=\"https://en.wikipedia.org/wiki/Atan2\" rel=\"nofollow noreferrer\"><span class=\"math-container\">$\\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class=\"math-container\">$x_0+iy_0$</span>.</p>\n<p>This formula works for all real numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> except when <span class=\"math-container\">$a=b=0$</span>.</p>\n<h2>Generalization</h2>\n<p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p>\n<p><span class=\"math-container\">$$ \\begin{align}a\\cos(x+\\alpha) + b \\cos(x+\\beta) &amp;= \\operatorname{Re}\\left\\{ae^{i(x+\\alpha)} + be^{i(x+\\beta)}\\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}\\left(ae^{i\\alpha}+be^{i\\beta}\\right) \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}ce^{i\\gamma} \\right\\}\\\\\n&amp;= 
\\operatorname{Re}\\left\\{ce^{i(x+\\gamma)} \\right\\}\\\\\n&amp;= c \\cos(x+\\gamma)\n\\end{align}\n$$</span></p>\n<p>where <span class=\"math-container\">$ce^{i\\gamma} = ae^{i\\alpha}+be^{i\\beta}$</span>.</p>\n<p>We have\n<span class=\"math-container\">$$\\begin{align}ce^{i\\gamma}&amp;=a (\\cos(\\alpha)+i\\sin(\\alpha)) +b (\\cos(\\beta)+i\\sin(\\beta))\\\\\n&amp;= (a\\cos(\\alpha)+b\\cos(\\beta)) + (a\\sin(\\alpha)+b\\sin(\\beta))i.\\\\\n\\end{align}\n$$</span></p>\n<p>So\n<span class=\"math-container\">$$c=\\sqrt{(a\\cos(\\alpha)+b\\cos(\\beta))^2+(a\\sin(\\alpha)+b\\sin(\\beta))^2}$$</span>\nand\n<span class=\"math-container\">$$\\gamma=\\operatorname{atan2}\\big(a\\sin(\\alpha)+b\\sin(\\beta),a\\cos(\\alpha)+b\\cos(\\beta)\\big).$$</span></p>\n<p>Sidenotes:</p>\n<ul>\n<li>This result is equivalent to\n<a href=\"https://en.wikipedia.org/wiki/Phasor\" rel=\"nofollow noreferrer\">phasor</a> addition.</li>\n<li>We can get similar formula involving <span class=\"math-container\">$\\sin$</span> by taking the imaginary\npart.</li>\n</ul>\n" }, { "answer_id": 2085691, "score": 2, "text": "<p>You are largely correct. However you could prove the first part in a very simple way:</p>\n<p>We can write <span class=\"math-container\">$$P= a\\cos x+b\\sin x =\\sqrt {a^2+b^2} \\left[\\frac {a}{\\sqrt {a^2+b^2}}\\cos x+\\frac{b}{\\sqrt {a^2+b^2}}\\sin x \\right] $$</span> Now we can take <span class=\"math-container\">$\\frac{a}{\\sqrt {a^2+b^2}} $</span> as <span class=\"math-container\">$\\cos \\alpha $</span> giving us <span class=\"math-container\">$$P=\\sqrt {a^2+b^2}[\\cos \\alpha \\cos x+\\sin \\alpha \\sin x]=\\sqrt{a^2+b^2}\\cos (x-\\alpha) $$</span> And also <span class=\"math-container\">$$\\alpha =\\arccos \\frac {a}{\\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p>\n<p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p>\n" }, { "answer_id": 2085694, "score": 1, "text": "<p>The proof is only valid for $0&lt;x&lt;\\frac\\pi2$. For a more general proof, let:\n$$a\\cos x+b\\sin x\\ \\equiv\\ R\\cos(x-\\alpha)=R\\cos\\alpha\\cos x + R\\sin\\alpha\\sin x$$\nSo $a=R\\cos\\alpha$ and $b=R\\sin\\alpha$, from which you can readily find $R$ and $\\alpha$.</p>\n" }, { "answer_id": 2085732, "score": 1, "text": "<p>Let $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Then,</p>\n\n<p>$\na\\cos x + b\\sin x\\\\\n= r\\cos \\alpha \\cos x + r\\sin \\alpha \\sin x\\\\\n= r \\cos (x - \\alpha)\n$</p>\n\n<p>since we know that $\\cos \\theta \\cos \\phi + \\sin \\theta \\sin \\phi = \\cos (\\theta - \\phi)$. Now, we already have $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. 
Squaring and adding both of these,</p>\n\n<p>$\na^2 + b^2 = r^2(\\cos^2 \\alpha + \\sin^2 \\alpha) = r^2 \\\\\n\\implies r = \\sqrt{a^2 + b^2}\n$</p>\n\n<p>which gives us our desired result.</p>\n" }, { "answer_id": 4021290, "score": 1, "text": "<p>We can prove this by using properties of complex numbers:\n<span class=\"math-container\">$$ \\begin{align}a\\cos(x) + b\\sin(x) &amp;= a \\cos(x) + b \\cos\\left(x-\\frac{\\pi}{2}\\right)\\\\\n &amp;=a \\operatorname{Re}\\left\\{e^{ix}\\right\\} + b \\operatorname{Re}\\left\\{e^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{ae^{ix} + be^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}(a-bi) \\right\\} \\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}\\sqrt{a^2+b^2}e^{i\\cdot\\operatorname{atan2}(-b,a)} \\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{\\sqrt{a^2+b^2}e^{i(x+\\operatorname{atan2}(-b,a))}\\right\\}\\\\\n &amp;= \\sqrt{a^2+b^2} \\cos(x+\\operatorname{atan2}(-b,a))\n\\end{align}\n$$</span></p>\n<p>where <a href=\"https://en.wikipedia.org/wiki/Atan2\" rel=\"nofollow noreferrer\"><span class=\"math-container\">$\\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class=\"math-container\">$x_0+iy_0$</span>.</p>\n<p>This formula works for all real numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> except when <span class=\"math-container\">$a=b=0$</span>.</p>\n<h2>Generalization</h2>\n<p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p>\n<p><span class=\"math-container\">$$ \\begin{align}a\\cos(x+\\alpha) + b \\cos(x+\\beta) &amp;= \\operatorname{Re}\\left\\{ae^{i(x+\\alpha)} + be^{i(x+\\beta)}\\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}\\left(ae^{i\\alpha}+be^{i\\beta}\\right) \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}ce^{i\\gamma} \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{ce^{i(x+\\gamma)} \\right\\}\\\\\n&amp;= c \\cos(x+\\gamma)\n\\end{align}\n$$</span></p>\n<p>where <span class=\"math-container\">$ce^{i\\gamma} = ae^{i\\alpha}+be^{i\\beta}$</span>.</p>\n<p>We have\n<span class=\"math-container\">$$\\begin{align}ce^{i\\gamma}&amp;=a (\\cos(\\alpha)+i\\sin(\\alpha)) +b (\\cos(\\beta)+i\\sin(\\beta))\\\\\n&amp;= (a\\cos(\\alpha)+b\\cos(\\beta)) + (a\\sin(\\alpha)+b\\sin(\\beta))i.\\\\\n\\end{align}\n$$</span></p>\n<p>So\n<span class=\"math-container\">$$c=\\sqrt{(a\\cos(\\alpha)+b\\cos(\\beta))^2+(a\\sin(\\alpha)+b\\sin(\\beta))^2}$$</span>\nand\n<span class=\"math-container\">$$\\gamma=\\operatorname{atan2}\\big(a\\sin(\\alpha)+b\\sin(\\beta),a\\cos(\\alpha)+b\\cos(\\beta)\\big).$$</span></p>\n<p>Sidenotes:</p>\n<ul>\n<li>This result is equivalent to\n<a href=\"https://en.wikipedia.org/wiki/Phasor\" rel=\"nofollow noreferrer\">phasor</a> addition.</li>\n<li>We can get similar formula involving <span class=\"math-container\">$\\sin$</span> by taking the imaginary\npart.</li>\n</ul>\n" }, { "answer_id": 2085691, "score": 2, "text": "<p>You are largely correct. 
However you could prove the first part in a very simple way:</p>\n<p>We can write <span class=\"math-container\">$$P= a\\cos x+b\\sin x =\\sqrt {a^2+b^2} \\left[\\frac {a}{\\sqrt {a^2+b^2}}\\cos x+\\frac{b}{\\sqrt {a^2+b^2}}\\sin x \\right] $$</span> Now we can take <span class=\"math-container\">$\\frac{a}{\\sqrt {a^2+b^2}} $</span> as <span class=\"math-container\">$\\cos \\alpha $</span> giving us <span class=\"math-container\">$$P=\\sqrt {a^2+b^2}[\\cos \\alpha \\cos x+\\sin \\alpha \\sin x]=\\sqrt{a^2+b^2}\\cos (x-\\alpha) $$</span> And also <span class=\"math-container\">$$\\alpha =\\arccos \\frac {a}{\\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p>\n<p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p>\n" }, { "answer_id": 2085694, "score": 1, "text": "<p>The proof is only valid for $0&lt;x&lt;\\frac\\pi2$. For a more general proof, let:\n$$a\\cos x+b\\sin x\\ \\equiv\\ R\\cos(x-\\alpha)=R\\cos\\alpha\\cos x + R\\sin\\alpha\\sin x$$\nSo $a=R\\cos\\alpha$ and $b=R\\sin\\alpha$, from which you can readily find $R$ and $\\alpha$.</p>\n" }, { "answer_id": 2085732, "score": 1, "text": "<p>Let $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Then,</p>\n\n<p>$\na\\cos x + b\\sin x\\\\\n= r\\cos \\alpha \\cos x + r\\sin \\alpha \\sin x\\\\\n= r \\cos (x - \\alpha)\n$</p>\n\n<p>since we know that $\\cos \\theta \\cos \\phi + \\sin \\theta \\sin \\phi = \\cos (\\theta - \\phi)$. Now, we already have $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Squaring and adding both of these,</p>\n\n<p>$\na^2 + b^2 = r^2(\\cos^2 \\alpha + \\sin^2 \\alpha) = r^2 \\\\\n\\implies r = \\sqrt{a^2 + b^2}\n$</p>\n\n<p>which gives us our desired result.</p>\n" }, { "answer_id": 4021290, "score": 1, "text": "<p>We can prove this by using properties of complex numbers:\n<span class=\"math-container\">$$ \\begin{align}a\\cos(x) + b\\sin(x) &amp;= a \\cos(x) + b \\cos\\left(x-\\frac{\\pi}{2}\\right)\\\\\n &amp;=a \\operatorname{Re}\\left\\{e^{ix}\\right\\} + b \\operatorname{Re}\\left\\{e^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{ae^{ix} + be^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}(a-bi) \\right\\} \\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}\\sqrt{a^2+b^2}e^{i\\cdot\\operatorname{atan2}(-b,a)} \\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{\\sqrt{a^2+b^2}e^{i(x+\\operatorname{atan2}(-b,a))}\\right\\}\\\\\n &amp;= \\sqrt{a^2+b^2} \\cos(x+\\operatorname{atan2}(-b,a))\n\\end{align}\n$$</span></p>\n<p>where <a href=\"https://en.wikipedia.org/wiki/Atan2\" rel=\"nofollow noreferrer\"><span class=\"math-container\">$\\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class=\"math-container\">$x_0+iy_0$</span>.</p>\n<p>This formula works for all real numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> except when <span class=\"math-container\">$a=b=0$</span>.</p>\n<h2>Generalization</h2>\n<p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p>\n<p><span class=\"math-container\">$$ \\begin{align}a\\cos(x+\\alpha) + b \\cos(x+\\beta) &amp;= \\operatorname{Re}\\left\\{ae^{i(x+\\alpha)} + be^{i(x+\\beta)}\\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}\\left(ae^{i\\alpha}+be^{i\\beta}\\right) \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}ce^{i\\gamma} \\right\\}\\\\\n&amp;= 
\\operatorname{Re}\\left\\{ce^{i(x+\\gamma)} \\right\\}\\\\\n&amp;= c \\cos(x+\\gamma)\n\\end{align}\n$$</span></p>\n<p>where <span class=\"math-container\">$ce^{i\\gamma} = ae^{i\\alpha}+be^{i\\beta}$</span>.</p>\n<p>We have\n<span class=\"math-container\">$$\\begin{align}ce^{i\\gamma}&amp;=a (\\cos(\\alpha)+i\\sin(\\alpha)) +b (\\cos(\\beta)+i\\sin(\\beta))\\\\\n&amp;= (a\\cos(\\alpha)+b\\cos(\\beta)) + (a\\sin(\\alpha)+b\\sin(\\beta))i.\\\\\n\\end{align}\n$$</span></p>\n<p>So\n<span class=\"math-container\">$$c=\\sqrt{(a\\cos(\\alpha)+b\\cos(\\beta))^2+(a\\sin(\\alpha)+b\\sin(\\beta))^2}$$</span>\nand\n<span class=\"math-container\">$$\\gamma=\\operatorname{atan2}\\big(a\\sin(\\alpha)+b\\sin(\\beta),a\\cos(\\alpha)+b\\cos(\\beta)\\big).$$</span></p>\n<p>Sidenotes:</p>\n<ul>\n<li>This result is equivalent to\n<a href=\"https://en.wikipedia.org/wiki/Phasor\" rel=\"nofollow noreferrer\">phasor</a> addition.</li>\n<li>We can get similar formula involving <span class=\"math-container\">$\\sin$</span> by taking the imaginary\npart.</li>\n</ul>\n" }, { "answer_id": 2085691, "score": 2, "text": "<p>You are largely correct. However you could prove the first part in a very simple way:</p>\n<p>We can write <span class=\"math-container\">$$P= a\\cos x+b\\sin x =\\sqrt {a^2+b^2} \\left[\\frac {a}{\\sqrt {a^2+b^2}}\\cos x+\\frac{b}{\\sqrt {a^2+b^2}}\\sin x \\right] $$</span> Now we can take <span class=\"math-container\">$\\frac{a}{\\sqrt {a^2+b^2}} $</span> as <span class=\"math-container\">$\\cos \\alpha $</span> giving us <span class=\"math-container\">$$P=\\sqrt {a^2+b^2}[\\cos \\alpha \\cos x+\\sin \\alpha \\sin x]=\\sqrt{a^2+b^2}\\cos (x-\\alpha) $$</span> And also <span class=\"math-container\">$$\\alpha =\\arccos \\frac {a}{\\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p>\n<p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p>\n" }, { "answer_id": 2085694, "score": 1, "text": "<p>The proof is only valid for $0&lt;x&lt;\\frac\\pi2$. For a more general proof, let:\n$$a\\cos x+b\\sin x\\ \\equiv\\ R\\cos(x-\\alpha)=R\\cos\\alpha\\cos x + R\\sin\\alpha\\sin x$$\nSo $a=R\\cos\\alpha$ and $b=R\\sin\\alpha$, from which you can readily find $R$ and $\\alpha$.</p>\n" }, { "answer_id": 2085732, "score": 1, "text": "<p>Let $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. Then,</p>\n\n<p>$\na\\cos x + b\\sin x\\\\\n= r\\cos \\alpha \\cos x + r\\sin \\alpha \\sin x\\\\\n= r \\cos (x - \\alpha)\n$</p>\n\n<p>since we know that $\\cos \\theta \\cos \\phi + \\sin \\theta \\sin \\phi = \\cos (\\theta - \\phi)$. Now, we already have $a = r \\cos \\alpha$ and $b = r \\sin \\alpha$. 
Squaring and adding both of these,</p>\n\n<p>$\na^2 + b^2 = r^2(\\cos^2 \\alpha + \\sin^2 \\alpha) = r^2 \\\\\n\\implies r = \\sqrt{a^2 + b^2}\n$</p>\n\n<p>which gives us our desired result.</p>\n" }, { "answer_id": 4021290, "score": 1, "text": "<p>We can prove this by using properties of complex numbers:\n<span class=\"math-container\">$$ \\begin{align}a\\cos(x) + b\\sin(x) &amp;= a \\cos(x) + b \\cos\\left(x-\\frac{\\pi}{2}\\right)\\\\\n &amp;=a \\operatorname{Re}\\left\\{e^{ix}\\right\\} + b \\operatorname{Re}\\left\\{e^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{ae^{ix} + be^{i(x-\\pi/2)}\\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}(a-bi) \\right\\} \\\\\n &amp;= \\operatorname{Re}\\left\\{e^{ix}\\sqrt{a^2+b^2}e^{i\\cdot\\operatorname{atan2}(-b,a)} \\right\\}\\\\\n &amp;= \\operatorname{Re}\\left\\{\\sqrt{a^2+b^2}e^{i(x+\\operatorname{atan2}(-b,a))}\\right\\}\\\\\n &amp;= \\sqrt{a^2+b^2} \\cos(x+\\operatorname{atan2}(-b,a))\n\\end{align}\n$$</span></p>\n<p>where <a href=\"https://en.wikipedia.org/wiki/Atan2\" rel=\"nofollow noreferrer\"><span class=\"math-container\">$\\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class=\"math-container\">$x_0+iy_0$</span>.</p>\n<p>This formula works for all real numbers <span class=\"math-container\">$a$</span> and <span class=\"math-container\">$b$</span> except when <span class=\"math-container\">$a=b=0$</span>.</p>\n<h2>Generalization</h2>\n<p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p>\n<p><span class=\"math-container\">$$ \\begin{align}a\\cos(x+\\alpha) + b \\cos(x+\\beta) &amp;= \\operatorname{Re}\\left\\{ae^{i(x+\\alpha)} + be^{i(x+\\beta)}\\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}\\left(ae^{i\\alpha}+be^{i\\beta}\\right) \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{e^{ix}ce^{i\\gamma} \\right\\}\\\\\n&amp;= \\operatorname{Re}\\left\\{ce^{i(x+\\gamma)} \\right\\}\\\\\n&amp;= c \\cos(x+\\gamma)\n\\end{align}\n$$</span></p>\n<p>where <span class=\"math-container\">$ce^{i\\gamma} = ae^{i\\alpha}+be^{i\\beta}$</span>.</p>\n<p>We have\n<span class=\"math-container\">$$\\begin{align}ce^{i\\gamma}&amp;=a (\\cos(\\alpha)+i\\sin(\\alpha)) +b (\\cos(\\beta)+i\\sin(\\beta))\\\\\n&amp;= (a\\cos(\\alpha)+b\\cos(\\beta)) + (a\\sin(\\alpha)+b\\sin(\\beta))i.\\\\\n\\end{align}\n$$</span></p>\n<p>So\n<span class=\"math-container\">$$c=\\sqrt{(a\\cos(\\alpha)+b\\cos(\\beta))^2+(a\\sin(\\alpha)+b\\sin(\\beta))^2}$$</span>\nand\n<span class=\"math-container\">$$\\gamma=\\operatorname{atan2}\\big(a\\sin(\\alpha)+b\\sin(\\beta),a\\cos(\\alpha)+b\\cos(\\beta)\\big).$$</span></p>\n<p>Sidenotes:</p>\n<ul>\n<li>This result is equivalent to\n<a href=\"https://en.wikipedia.org/wiki/Phasor\" rel=\"nofollow noreferrer\">phasor</a> addition.</li>\n<li>We can get similar formula involving <span class=\"math-container\">$\\sin$</span> by taking the imaginary\npart.</li>\n</ul>\n" } ]
Question: Proving $a\cos x+b\sin x=\sqrt{a^2+b^2}\cos(x-\alpha)$ Answer 1: <p>You are largely correct. However you could prove the first part in a very simple way:</p> <p>We can write <span class="math-container">$$P= a\cos x+b\sin x =\sqrt {a^2+b^2} \left[\frac {a}{\sqrt {a^2+b^2}}\cos x+\frac{b}{\sqrt {a^2+b^2}}\sin x \right] $$</span> Now we can take <span class="math-container">$\frac{a}{\sqrt {a^2+b^2}} $</span> as <span class="math-container">$\cos \alpha $</span> giving us <span class="math-container">$$P=\sqrt {a^2+b^2}[\cos \alpha \cos x+\sin \alpha \sin x]=\sqrt{a^2+b^2}\cos (x-\alpha) $$</span> And also <span class="math-container">$$\alpha =\arccos \frac {a}{\sqrt {a^2+b^2}} $$</span> which I think maybe a small typo in your calculation.</p> <p>You need not prove the first part with the help of a triangle but your approach is fine. Hope it helps.</p> Answer 2: <p>The proof is only valid for $0&lt;x&lt;\frac\pi2$. For a more general proof, let: $$a\cos x+b\sin x\ \equiv\ R\cos(x-\alpha)=R\cos\alpha\cos x + R\sin\alpha\sin x$$ So $a=R\cos\alpha$ and $b=R\sin\alpha$, from which you can readily find $R$ and $\alpha$.</p> Answer 3: <p>Let $a = r \cos \alpha$ and $b = r \sin \alpha$. Then,</p> <p>$ a\cos x + b\sin x\\ = r\cos \alpha \cos x + r\sin \alpha \sin x\\ = r \cos (x - \alpha) $</p> <p>since we know that $\cos \theta \cos \phi + \sin \theta \sin \phi = \cos (\theta - \phi)$. Now, we already have $a = r \cos \alpha$ and $b = r \sin \alpha$. Squaring and adding both of these,</p> <p>$ a^2 + b^2 = r^2(\cos^2 \alpha + \sin^2 \alpha) = r^2 \\ \implies r = \sqrt{a^2 + b^2} $</p> <p>which gives us our desired result.</p> Answer 4: <p>We can prove this by using properties of complex numbers: <span class="math-container">$$ \begin{align}a\cos(x) + b\sin(x) &amp;= a \cos(x) + b \cos\left(x-\frac{\pi}{2}\right)\\ &amp;=a \operatorname{Re}\left\{e^{ix}\right\} + b \operatorname{Re}\left\{e^{i(x-\pi/2)}\right\}\\ &amp;= \operatorname{Re}\left\{ae^{ix} + be^{i(x-\pi/2)}\right\}\\ &amp;= \operatorname{Re}\left\{e^{ix}(a-bi) \right\} \\ &amp;= \operatorname{Re}\left\{e^{ix}\sqrt{a^2+b^2}e^{i\cdot\operatorname{atan2}(-b,a)} \right\}\\ &amp;= \operatorname{Re}\left\{\sqrt{a^2+b^2}e^{i(x+\operatorname{atan2}(-b,a))}\right\}\\ &amp;= \sqrt{a^2+b^2} \cos(x+\operatorname{atan2}(-b,a)) \end{align} $$</span></p> <p>where <a href="https://en.wikipedia.org/wiki/Atan2" rel="nofollow noreferrer"><span class="math-container">$\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class="math-container">$x_0+iy_0$</span>.</p> <p>This formula works for all real numbers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> except when <span class="math-container">$a=b=0$</span>.</p> <h2>Generalization</h2> <p>Following the same procedure, we can find a general formula for the sum of two sinusoids with same frequency but (possibly) different phases:</p> <p><span class="math-container">$$ \begin{align}a\cos(x+\alpha) + b \cos(x+\beta) &amp;= \operatorname{Re}\left\{ae^{i(x+\alpha)} + be^{i(x+\beta)}\right\}\\ &amp;= \operatorname{Re}\left\{e^{ix}\left(ae^{i\alpha}+be^{i\beta}\right) \right\}\\ &amp;= \operatorname{Re}\left\{e^{ix}ce^{i\gamma} \right\}\\ &amp;= \operatorname{Re}\left\{ce^{i(x+\gamma)} \right\}\\ &amp;= c \cos(x+\gamma) \end{align} $$</span></p> <p>where <span class="math-container">$ce^{i\gamma} = ae^{i\alpha}+be^{i\beta}$</span>.</p> <p>We have <span class="math-container">$$\begin{align}ce^{i\gamma}&amp;=a (\cos(\alpha)+i\sin(\alpha)) 
Answer 4: <p>We can prove this by using properties of complex numbers: <span class="math-container">$$ \begin{align}a\cos(x) + b\sin(x) &amp;= a \cos(x) + b \cos\left(x-\frac{\pi}{2}\right)\\ &amp;=a \operatorname{Re}\left\{e^{ix}\right\} + b \operatorname{Re}\left\{e^{i(x-\pi/2)}\right\}\\ &amp;= \operatorname{Re}\left\{ae^{ix} + be^{i(x-\pi/2)}\right\}\\ &amp;= \operatorname{Re}\left\{e^{ix}(a-bi) \right\} \\ &amp;= \operatorname{Re}\left\{e^{ix}\sqrt{a^2+b^2}e^{i\cdot\operatorname{atan2}(-b,a)} \right\}\\ &amp;= \operatorname{Re}\left\{\sqrt{a^2+b^2}e^{i(x+\operatorname{atan2}(-b,a))}\right\}\\ &amp;= \sqrt{a^2+b^2} \cos(x+\operatorname{atan2}(-b,a)) \end{align} $$</span></p> <p>where <a href="https://en.wikipedia.org/wiki/Atan2" rel="nofollow noreferrer"><span class="math-container">$\operatorname{atan2}(y_0,x_0)$</span></a> is the angle of the complex number <span class="math-container">$x_0+iy_0$</span>.</p> <p>This formula works for all real numbers <span class="math-container">$a$</span> and <span class="math-container">$b$</span> except when <span class="math-container">$a=b=0$</span>.</p> <h2>Generalization</h2> <p>Following the same procedure, we can find a general formula for the sum of two sinusoids with the same frequency but (possibly) different phases:</p> <p><span class="math-container">$$ \begin{align}a\cos(x+\alpha) + b \cos(x+\beta) &amp;= \operatorname{Re}\left\{ae^{i(x+\alpha)} + be^{i(x+\beta)}\right\}\\ &amp;= \operatorname{Re}\left\{e^{ix}\left(ae^{i\alpha}+be^{i\beta}\right) \right\}\\ &amp;= \operatorname{Re}\left\{e^{ix}ce^{i\gamma} \right\}\\ &amp;= \operatorname{Re}\left\{ce^{i(x+\gamma)} \right\}\\ &amp;= c \cos(x+\gamma) \end{align} $$</span></p> <p>where <span class="math-container">$ce^{i\gamma} = ae^{i\alpha}+be^{i\beta}$</span>.</p> <p>We have <span class="math-container">$$\begin{align}ce^{i\gamma}&amp;=a (\cos(\alpha)+i\sin(\alpha)) +b (\cos(\beta)+i\sin(\beta))\\ &amp;= (a\cos(\alpha)+b\cos(\beta)) + (a\sin(\alpha)+b\sin(\beta))i.\\ \end{align} $$</span></p> <p>So <span class="math-container">$$c=\sqrt{(a\cos(\alpha)+b\cos(\beta))^2+(a\sin(\alpha)+b\sin(\beta))^2}$$</span> and <span class="math-container">$$\gamma=\operatorname{atan2}\big(a\sin(\alpha)+b\sin(\beta),a\cos(\alpha)+b\cos(\beta)\big).$$</span></p> <p>Sidenotes:</p> <ul> <li>This result is equivalent to <a href="https://en.wikipedia.org/wiki/Phasor" rel="nofollow noreferrer">phasor</a> addition.</li> <li>We can get a similar formula involving <span class="math-container">$\sin$</span> by taking the imaginary part.</li> </ul>
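The phasor form from the generalization can be checked the same way. Here is another short sketch of mine, not from the answer itself: it builds the complex amplitude $ce^{i\gamma}=ae^{i\alpha}+be^{i\beta}$ directly with complex arithmetic and compares both sides of $a\cos(x+\alpha)+b\cos(x+\beta)=c\cos(x+\gamma)$ at random points:

```python
import cmath
import math
import random

# Check the phasor-addition formula derived above:
#   a*cos(x+alpha) + b*cos(x+beta) == c*cos(x+gamma),
# where the complex amplitude c*e^{i*gamma} = a*e^{i*alpha} + b*e^{i*beta}.
random.seed(1)
for _ in range(1000):
    a, b = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    alpha, beta = random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi)
    x = random.uniform(-10.0, 10.0)
    phasor = a * cmath.exp(1j * alpha) + b * cmath.exp(1j * beta)
    c, gamma = abs(phasor), cmath.phase(phasor)   # cmath.phase(z) = atan2(Im z, Re z)
    lhs = a * math.cos(x + alpha) + b * math.cos(x + beta)
    rhs = c * math.cos(x + gamma)
    assert math.isclose(lhs, rhs, abs_tol=1e-9)
print("phasor addition verified at 1000 random points")
```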
Maximum principle for harmonic functions
2,086,191
6
https://math.stackexchange.com/questions/2086191/maximum-principle-for-harmonic-functions
<p>I know the following classical maximum principle for harmonic functions: </p> <blockquote> <p>If $\Omega \subset \mathbb{C}$ is open and connected and $u \in C^2(\Omega)$ is harmonic, then </p> <p>$u$ has a maximum (or minimum) in $\Omega$ $\implies$ $u$ is constant.</p> </blockquote> <p>How can I prove that the theorem remains true if the hypothesis is only that $u$ has a <strong><em>local</em></strong> maximum (or minimum)?</p>
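Not part of the question itself, but a concrete numerical illustration of the mechanism behind the maximum principle may help: harmonic functions satisfy the mean value property, so the value at an interior point equals its average over surrounding circles and can never strictly dominate them. The Python sketch below (my own, with an arbitrarily chosen test point and radius) checks this for the harmonic function $u(x,y)=x^2-y^2=\operatorname{Re}((x+iy)^2)$:

```python
import math

# u(x, y) = x^2 - y^2 = Re((x + iy)^2) is harmonic on all of R^2 ~ C.
def u(x, y):
    return x * x - y * y

def circle_mean(cx, cy, r, n=360):
    # Uniform average of u over the circle of radius r centered at (cx, cy).
    # For this u the integrand is a trigonometric polynomial of degree 2,
    # so the n-point average is exact (up to floating point) once n >= 3.
    return sum(u(cx + r * math.cos(2 * math.pi * k / n),
                 cy + r * math.sin(2 * math.pi * k / n)) for k in range(n)) / n

# Mean value property: the average over any circle equals the center value,
# so u cannot have a strict local maximum at any interior point.
assert math.isclose(u(0.3, -0.7), circle_mean(0.3, -0.7, 0.5), abs_tol=1e-9)
print("mean value property holds at (0.3, -0.7)")
```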
[ { "answer_id": 2101721, "score": 5, "text": "<p>If $u$ has a local maximum at $x_0 \\in \\Omega$, that means that there is an open neighbourhood $U\\subset \\Omega$ of $x_0$ such that $u\\lvert_U$ has a global maximum at $x_0$. Since clearly $u\\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\\lvert_U$ is constant.</p>\n\n<p>The two harmonic functions $u$ and $v \\colon z \\mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\\mathbb{C}\\cong \\mathbb{R}^2$) implies that $u \\equiv v$ on $\\Omega$, i.e. $u$ is constant.</p>\n\n<blockquote>\n <p><strong>Identity theorem:</strong> Let $W\\subset \\mathbb{R}^n$ a connected open set, and $f,g \\colon W \\to \\mathbb{R}$ real-analytic. If there is a nonempty open set $V\\subset W$ such that $f\\lvert_V \\equiv g\\lvert_V$, then $f \\equiv g$ on all of $W$.</p>\n</blockquote>\n\n<p><strong>Proof:</strong> It suffices to consider $g \\equiv 0$, since $f\\equiv g \\iff f-g \\equiv 0$. Consider the set</p>\n\n<p>$$Z = \\{ x \\in W : \\text{there is a neighbourhood } U \\text{ of } x \\text{ such that } f\\lvert_U \\equiv 0\\}.$$</p>\n\n<p>By its definition, $Z$ is an open subset of $W$ (if $x\\in W$ and $U$ is a neighbourhood of $x$ such that $f\\lvert_U \\equiv 0$, then $y\\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p>\n\n<p>$$Z = \\{ x \\in W : D^{\\alpha}f(x) = 0 \\text{ for all } \\alpha \\in \\mathbb{N}^n\\},$$</p>\n\n<p>for the power series representation of $f$ about $x$,</p>\n\n<p>$$f(y) = \\sum_{\\alpha \\in \\mathbb{N}^n} \\frac{D^{\\alpha}f(x)}{\\alpha!}(y-x)^{\\alpha},$$</p>\n\n<p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. By continuity of $D^{\\alpha}f$, the set</p>\n\n<p>$$Z_{\\alpha} = (D^{\\alpha}f)^{-1}(0)$$</p>\n\n<p>is closed (in $W$) for every $\\alpha \\in \\mathbb{N}^n$, and hence so is the intersection</p>\n\n<p>$$Z = \\bigcap_{\\alpha \\in \\mathbb{N}^n} Z_{\\alpha}.$$</p>\n\n<p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \\varnothing$. But $Z \\neq \\varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \\equiv 0$ on $W$.</p>\n" }, { "answer_id": 2101721, "score": 5, "text": "<p>If $u$ has a local maximum at $x_0 \\in \\Omega$, that means that there is an open neighbourhood $U\\subset \\Omega$ of $x_0$ such that $u\\lvert_U$ has a global maximum at $x_0$. Since clearly $u\\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\\lvert_U$ is constant.</p>\n\n<p>The two harmonic functions $u$ and $v \\colon z \\mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\\mathbb{C}\\cong \\mathbb{R}^2$) implies that $u \\equiv v$ on $\\Omega$, i.e. $u$ is constant.</p>\n\n<blockquote>\n <p><strong>Identity theorem:</strong> Let $W\\subset \\mathbb{R}^n$ a connected open set, and $f,g \\colon W \\to \\mathbb{R}$ real-analytic. 
If there is a nonempty open set $V\\subset W$ such that $f\\lvert_V \\equiv g\\lvert_V$, then $f \\equiv g$ on all of $W$.</p>\n</blockquote>\n\n<p><strong>Proof:</strong> It suffices to consider $g \\equiv 0$, since $f\\equiv g \\iff f-g \\equiv 0$. Consider the set</p>\n\n<p>$$Z = \\{ x \\in W : \\text{there is a neighbourhood } U \\text{ of } x \\text{ such that } f\\lvert_U \\equiv 0\\}.$$</p>\n\n<p>By its definition, $Z$ is an open subset of $W$ (if $x\\in W$ and $U$ is a neighbourhood of $x$ such that $f\\lvert_U \\equiv 0$, then $y\\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p>\n\n<p>$$Z = \\{ x \\in W : D^{\\alpha}f(x) = 0 \\text{ for all } \\alpha \\in \\mathbb{N}^n\\},$$</p>\n\n<p>for the power series representation of $f$ about $x$,</p>\n\n<p>$$f(y) = \\sum_{\\alpha \\in \\mathbb{N}^n} \\frac{D^{\\alpha}f(x)}{\\alpha!}(y-x)^{\\alpha},$$</p>\n\n<p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. By continuity of $D^{\\alpha}f$, the set</p>\n\n<p>$$Z_{\\alpha} = (D^{\\alpha}f)^{-1}(0)$$</p>\n\n<p>is closed (in $W$) for every $\\alpha \\in \\mathbb{N}^n$, and hence so is the intersection</p>\n\n<p>$$Z = \\bigcap_{\\alpha \\in \\mathbb{N}^n} Z_{\\alpha}.$$</p>\n\n<p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \\varnothing$. But $Z \\neq \\varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \\equiv 0$ on $W$.</p>\n" }, { "answer_id": 2101721, "score": 5, "text": "<p>If $u$ has a local maximum at $x_0 \\in \\Omega$, that means that there is an open neighbourhood $U\\subset \\Omega$ of $x_0$ such that $u\\lvert_U$ has a global maximum at $x_0$. Since clearly $u\\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\\lvert_U$ is constant.</p>\n\n<p>The two harmonic functions $u$ and $v \\colon z \\mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\\mathbb{C}\\cong \\mathbb{R}^2$) implies that $u \\equiv v$ on $\\Omega$, i.e. $u$ is constant.</p>\n\n<blockquote>\n <p><strong>Identity theorem:</strong> Let $W\\subset \\mathbb{R}^n$ a connected open set, and $f,g \\colon W \\to \\mathbb{R}$ real-analytic. If there is a nonempty open set $V\\subset W$ such that $f\\lvert_V \\equiv g\\lvert_V$, then $f \\equiv g$ on all of $W$.</p>\n</blockquote>\n\n<p><strong>Proof:</strong> It suffices to consider $g \\equiv 0$, since $f\\equiv g \\iff f-g \\equiv 0$. Consider the set</p>\n\n<p>$$Z = \\{ x \\in W : \\text{there is a neighbourhood } U \\text{ of } x \\text{ such that } f\\lvert_U \\equiv 0\\}.$$</p>\n\n<p>By its definition, $Z$ is an open subset of $W$ (if $x\\in W$ and $U$ is a neighbourhood of $x$ such that $f\\lvert_U \\equiv 0$, then $y\\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p>\n\n<p>$$Z = \\{ x \\in W : D^{\\alpha}f(x) = 0 \\text{ for all } \\alpha \\in \\mathbb{N}^n\\},$$</p>\n\n<p>for the power series representation of $f$ about $x$,</p>\n\n<p>$$f(y) = \\sum_{\\alpha \\in \\mathbb{N}^n} \\frac{D^{\\alpha}f(x)}{\\alpha!}(y-x)^{\\alpha},$$</p>\n\n<p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. 
By continuity of $D^{\\alpha}f$, the set</p>\n\n<p>$$Z_{\\alpha} = (D^{\\alpha}f)^{-1}(0)$$</p>\n\n<p>is closed (in $W$) for every $\\alpha \\in \\mathbb{N}^n$, and hence so is the intersection</p>\n\n<p>$$Z = \\bigcap_{\\alpha \\in \\mathbb{N}^n} Z_{\\alpha}.$$</p>\n\n<p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \\varnothing$. But $Z \\neq \\varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \\equiv 0$ on $W$.</p>\n" }, { "answer_id": 2101721, "score": 5, "text": "<p>If $u$ has a local maximum at $x_0 \\in \\Omega$, that means that there is an open neighbourhood $U\\subset \\Omega$ of $x_0$ such that $u\\lvert_U$ has a global maximum at $x_0$. Since clearly $u\\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\\lvert_U$ is constant.</p>\n\n<p>The two harmonic functions $u$ and $v \\colon z \\mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\\mathbb{C}\\cong \\mathbb{R}^2$) implies that $u \\equiv v$ on $\\Omega$, i.e. $u$ is constant.</p>\n\n<blockquote>\n <p><strong>Identity theorem:</strong> Let $W\\subset \\mathbb{R}^n$ a connected open set, and $f,g \\colon W \\to \\mathbb{R}$ real-analytic. If there is a nonempty open set $V\\subset W$ such that $f\\lvert_V \\equiv g\\lvert_V$, then $f \\equiv g$ on all of $W$.</p>\n</blockquote>\n\n<p><strong>Proof:</strong> It suffices to consider $g \\equiv 0$, since $f\\equiv g \\iff f-g \\equiv 0$. Consider the set</p>\n\n<p>$$Z = \\{ x \\in W : \\text{there is a neighbourhood } U \\text{ of } x \\text{ such that } f\\lvert_U \\equiv 0\\}.$$</p>\n\n<p>By its definition, $Z$ is an open subset of $W$ (if $x\\in W$ and $U$ is a neighbourhood of $x$ such that $f\\lvert_U \\equiv 0$, then $y\\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p>\n\n<p>$$Z = \\{ x \\in W : D^{\\alpha}f(x) = 0 \\text{ for all } \\alpha \\in \\mathbb{N}^n\\},$$</p>\n\n<p>for the power series representation of $f$ about $x$,</p>\n\n<p>$$f(y) = \\sum_{\\alpha \\in \\mathbb{N}^n} \\frac{D^{\\alpha}f(x)}{\\alpha!}(y-x)^{\\alpha},$$</p>\n\n<p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. By continuity of $D^{\\alpha}f$, the set</p>\n\n<p>$$Z_{\\alpha} = (D^{\\alpha}f)^{-1}(0)$$</p>\n\n<p>is closed (in $W$) for every $\\alpha \\in \\mathbb{N}^n$, and hence so is the intersection</p>\n\n<p>$$Z = \\bigcap_{\\alpha \\in \\mathbb{N}^n} Z_{\\alpha}.$$</p>\n\n<p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \\varnothing$. But $Z \\neq \\varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \\equiv 0$ on $W$.</p>\n" }, { "answer_id": 2101721, "score": 5, "text": "<p>If $u$ has a local maximum at $x_0 \\in \\Omega$, that means that there is an open neighbourhood $U\\subset \\Omega$ of $x_0$ such that $u\\lvert_U$ has a global maximum at $x_0$. 
Since clearly $u\\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\\lvert_U$ is constant.</p>\n\n<p>The two harmonic functions $u$ and $v \\colon z \\mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\\mathbb{C}\\cong \\mathbb{R}^2$) implies that $u \\equiv v$ on $\\Omega$, i.e. $u$ is constant.</p>\n\n<blockquote>\n <p><strong>Identity theorem:</strong> Let $W\\subset \\mathbb{R}^n$ a connected open set, and $f,g \\colon W \\to \\mathbb{R}$ real-analytic. If there is a nonempty open set $V\\subset W$ such that $f\\lvert_V \\equiv g\\lvert_V$, then $f \\equiv g$ on all of $W$.</p>\n</blockquote>\n\n<p><strong>Proof:</strong> It suffices to consider $g \\equiv 0$, since $f\\equiv g \\iff f-g \\equiv 0$. Consider the set</p>\n\n<p>$$Z = \\{ x \\in W : \\text{there is a neighbourhood } U \\text{ of } x \\text{ such that } f\\lvert_U \\equiv 0\\}.$$</p>\n\n<p>By its definition, $Z$ is an open subset of $W$ (if $x\\in W$ and $U$ is a neighbourhood of $x$ such that $f\\lvert_U \\equiv 0$, then $y\\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p>\n\n<p>$$Z = \\{ x \\in W : D^{\\alpha}f(x) = 0 \\text{ for all } \\alpha \\in \\mathbb{N}^n\\},$$</p>\n\n<p>for the power series representation of $f$ about $x$,</p>\n\n<p>$$f(y) = \\sum_{\\alpha \\in \\mathbb{N}^n} \\frac{D^{\\alpha}f(x)}{\\alpha!}(y-x)^{\\alpha},$$</p>\n\n<p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. By continuity of $D^{\\alpha}f$, the set</p>\n\n<p>$$Z_{\\alpha} = (D^{\\alpha}f)^{-1}(0)$$</p>\n\n<p>is closed (in $W$) for every $\\alpha \\in \\mathbb{N}^n$, and hence so is the intersection</p>\n\n<p>$$Z = \\bigcap_{\\alpha \\in \\mathbb{N}^n} Z_{\\alpha}.$$</p>\n\n<p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \\varnothing$. But $Z \\neq \\varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \\equiv 0$ on $W$.</p>\n" }, { "answer_id": 2101721, "score": 5, "text": "<p>If $u$ has a local maximum at $x_0 \\in \\Omega$, that means that there is an open neighbourhood $U\\subset \\Omega$ of $x_0$ such that $u\\lvert_U$ has a global maximum at $x_0$. Since clearly $u\\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\\lvert_U$ is constant.</p>\n\n<p>The two harmonic functions $u$ and $v \\colon z \\mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\\mathbb{C}\\cong \\mathbb{R}^2$) implies that $u \\equiv v$ on $\\Omega$, i.e. $u$ is constant.</p>\n\n<blockquote>\n <p><strong>Identity theorem:</strong> Let $W\\subset \\mathbb{R}^n$ a connected open set, and $f,g \\colon W \\to \\mathbb{R}$ real-analytic. If there is a nonempty open set $V\\subset W$ such that $f\\lvert_V \\equiv g\\lvert_V$, then $f \\equiv g$ on all of $W$.</p>\n</blockquote>\n\n<p><strong>Proof:</strong> It suffices to consider $g \\equiv 0$, since $f\\equiv g \\iff f-g \\equiv 0$. 
Consider the set</p>\n\n<p>$$Z = \\{ x \\in W : \\text{there is a neighbourhood } U \\text{ of } x \\text{ such that } f\\lvert_U \\equiv 0\\}.$$</p>\n\n<p>By its definition, $Z$ is an open subset of $W$ (if $x\\in W$ and $U$ is a neighbourhood of $x$ such that $f\\lvert_U \\equiv 0$, then $y\\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p>\n\n<p>$$Z = \\{ x \\in W : D^{\\alpha}f(x) = 0 \\text{ for all } \\alpha \\in \\mathbb{N}^n\\},$$</p>\n\n<p>for the power series representation of $f$ about $x$,</p>\n\n<p>$$f(y) = \\sum_{\\alpha \\in \\mathbb{N}^n} \\frac{D^{\\alpha}f(x)}{\\alpha!}(y-x)^{\\alpha},$$</p>\n\n<p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. By continuity of $D^{\\alpha}f$, the set</p>\n\n<p>$$Z_{\\alpha} = (D^{\\alpha}f)^{-1}(0)$$</p>\n\n<p>is closed (in $W$) for every $\\alpha \\in \\mathbb{N}^n$, and hence so is the intersection</p>\n\n<p>$$Z = \\bigcap_{\\alpha \\in \\mathbb{N}^n} Z_{\\alpha}.$$</p>\n\n<p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \\varnothing$. But $Z \\neq \\varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \\equiv 0$ on $W$.</p>\n" }, { "answer_id": 2101721, "score": 5, "text": "<p>If $u$ has a local maximum at $x_0 \\in \\Omega$, that means that there is an open neighbourhood $U\\subset \\Omega$ of $x_0$ such that $u\\lvert_U$ has a global maximum at $x_0$. Since clearly $u\\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\\lvert_U$ is constant.</p>\n\n<p>The two harmonic functions $u$ and $v \\colon z \\mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\\mathbb{C}\\cong \\mathbb{R}^2$) implies that $u \\equiv v$ on $\\Omega$, i.e. $u$ is constant.</p>\n\n<blockquote>\n <p><strong>Identity theorem:</strong> Let $W\\subset \\mathbb{R}^n$ a connected open set, and $f,g \\colon W \\to \\mathbb{R}$ real-analytic. If there is a nonempty open set $V\\subset W$ such that $f\\lvert_V \\equiv g\\lvert_V$, then $f \\equiv g$ on all of $W$.</p>\n</blockquote>\n\n<p><strong>Proof:</strong> It suffices to consider $g \\equiv 0$, since $f\\equiv g \\iff f-g \\equiv 0$. Consider the set</p>\n\n<p>$$Z = \\{ x \\in W : \\text{there is a neighbourhood } U \\text{ of } x \\text{ such that } f\\lvert_U \\equiv 0\\}.$$</p>\n\n<p>By its definition, $Z$ is an open subset of $W$ (if $x\\in W$ and $U$ is a neighbourhood of $x$ such that $f\\lvert_U \\equiv 0$, then $y\\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p>\n\n<p>$$Z = \\{ x \\in W : D^{\\alpha}f(x) = 0 \\text{ for all } \\alpha \\in \\mathbb{N}^n\\},$$</p>\n\n<p>for the power series representation of $f$ about $x$,</p>\n\n<p>$$f(y) = \\sum_{\\alpha \\in \\mathbb{N}^n} \\frac{D^{\\alpha}f(x)}{\\alpha!}(y-x)^{\\alpha},$$</p>\n\n<p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. 
By continuity of $D^{\\alpha}f$, the set</p>\n\n<p>$$Z_{\\alpha} = (D^{\\alpha}f)^{-1}(0)$$</p>\n\n<p>is closed (in $W$) for every $\\alpha \\in \\mathbb{N}^n$, and hence so is the intersection</p>\n\n<p>$$Z = \\bigcap_{\\alpha \\in \\mathbb{N}^n} Z_{\\alpha}.$$</p>\n\n<p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \\varnothing$. But $Z \\neq \\varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \\equiv 0$ on $W$.</p>\n" }, { "answer_id": 2101721, "score": 5, "text": "<p>If $u$ has a local maximum at $x_0 \\in \\Omega$, that means that there is an open neighbourhood $U\\subset \\Omega$ of $x_0$ such that $u\\lvert_U$ has a global maximum at $x_0$. Since clearly $u\\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\\lvert_U$ is constant.</p>\n\n<p>The two harmonic functions $u$ and $v \\colon z \\mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\\mathbb{C}\\cong \\mathbb{R}^2$) implies that $u \\equiv v$ on $\\Omega$, i.e. $u$ is constant.</p>\n\n<blockquote>\n <p><strong>Identity theorem:</strong> Let $W\\subset \\mathbb{R}^n$ a connected open set, and $f,g \\colon W \\to \\mathbb{R}$ real-analytic. If there is a nonempty open set $V\\subset W$ such that $f\\lvert_V \\equiv g\\lvert_V$, then $f \\equiv g$ on all of $W$.</p>\n</blockquote>\n\n<p><strong>Proof:</strong> It suffices to consider $g \\equiv 0$, since $f\\equiv g \\iff f-g \\equiv 0$. Consider the set</p>\n\n<p>$$Z = \\{ x \\in W : \\text{there is a neighbourhood } U \\text{ of } x \\text{ such that } f\\lvert_U \\equiv 0\\}.$$</p>\n\n<p>By its definition, $Z$ is an open subset of $W$ (if $x\\in W$ and $U$ is a neighbourhood of $x$ such that $f\\lvert_U \\equiv 0$, then $y\\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p>\n\n<p>$$Z = \\{ x \\in W : D^{\\alpha}f(x) = 0 \\text{ for all } \\alpha \\in \\mathbb{N}^n\\},$$</p>\n\n<p>for the power series representation of $f$ about $x$,</p>\n\n<p>$$f(y) = \\sum_{\\alpha \\in \\mathbb{N}^n} \\frac{D^{\\alpha}f(x)}{\\alpha!}(y-x)^{\\alpha},$$</p>\n\n<p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. By continuity of $D^{\\alpha}f$, the set</p>\n\n<p>$$Z_{\\alpha} = (D^{\\alpha}f)^{-1}(0)$$</p>\n\n<p>is closed (in $W$) for every $\\alpha \\in \\mathbb{N}^n$, and hence so is the intersection</p>\n\n<p>$$Z = \\bigcap_{\\alpha \\in \\mathbb{N}^n} Z_{\\alpha}.$$</p>\n\n<p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \\varnothing$. But $Z \\neq \\varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \\equiv 0$ on $W$.</p>\n" }, { "answer_id": 2101721, "score": 5, "text": "<p>If $u$ has a local maximum at $x_0 \\in \\Omega$, that means that there is an open neighbourhood $U\\subset \\Omega$ of $x_0$ such that $u\\lvert_U$ has a global maximum at $x_0$. 
Since clearly $u\\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\\lvert_U$ is constant.</p>\n\n<p>The two harmonic functions $u$ and $v \\colon z \\mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\\mathbb{C}\\cong \\mathbb{R}^2$) implies that $u \\equiv v$ on $\\Omega$, i.e. $u$ is constant.</p>\n\n<blockquote>\n <p><strong>Identity theorem:</strong> Let $W\\subset \\mathbb{R}^n$ a connected open set, and $f,g \\colon W \\to \\mathbb{R}$ real-analytic. If there is a nonempty open set $V\\subset W$ such that $f\\lvert_V \\equiv g\\lvert_V$, then $f \\equiv g$ on all of $W$.</p>\n</blockquote>\n\n<p><strong>Proof:</strong> It suffices to consider $g \\equiv 0$, since $f\\equiv g \\iff f-g \\equiv 0$. Consider the set</p>\n\n<p>$$Z = \\{ x \\in W : \\text{there is a neighbourhood } U \\text{ of } x \\text{ such that } f\\lvert_U \\equiv 0\\}.$$</p>\n\n<p>By its definition, $Z$ is an open subset of $W$ (if $x\\in W$ and $U$ is a neighbourhood of $x$ such that $f\\lvert_U \\equiv 0$, then $y\\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p>\n\n<p>$$Z = \\{ x \\in W : D^{\\alpha}f(x) = 0 \\text{ for all } \\alpha \\in \\mathbb{N}^n\\},$$</p>\n\n<p>for the power series representation of $f$ about $x$,</p>\n\n<p>$$f(y) = \\sum_{\\alpha \\in \\mathbb{N}^n} \\frac{D^{\\alpha}f(x)}{\\alpha!}(y-x)^{\\alpha},$$</p>\n\n<p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. By continuity of $D^{\\alpha}f$, the set</p>\n\n<p>$$Z_{\\alpha} = (D^{\\alpha}f)^{-1}(0)$$</p>\n\n<p>is closed (in $W$) for every $\\alpha \\in \\mathbb{N}^n$, and hence so is the intersection</p>\n\n<p>$$Z = \\bigcap_{\\alpha \\in \\mathbb{N}^n} Z_{\\alpha}.$$</p>\n\n<p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \\varnothing$. But $Z \\neq \\varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \\equiv 0$ on $W$.</p>\n" }, { "answer_id": 2101721, "score": 5, "text": "<p>If $u$ has a local maximum at $x_0 \\in \\Omega$, that means that there is an open neighbourhood $U\\subset \\Omega$ of $x_0$ such that $u\\lvert_U$ has a global maximum at $x_0$. Since clearly $u\\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\\lvert_U$ is constant.</p>\n\n<p>The two harmonic functions $u$ and $v \\colon z \\mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\\mathbb{C}\\cong \\mathbb{R}^2$) implies that $u \\equiv v$ on $\\Omega$, i.e. $u$ is constant.</p>\n\n<blockquote>\n <p><strong>Identity theorem:</strong> Let $W\\subset \\mathbb{R}^n$ a connected open set, and $f,g \\colon W \\to \\mathbb{R}$ real-analytic. If there is a nonempty open set $V\\subset W$ such that $f\\lvert_V \\equiv g\\lvert_V$, then $f \\equiv g$ on all of $W$.</p>\n</blockquote>\n\n<p><strong>Proof:</strong> It suffices to consider $g \\equiv 0$, since $f\\equiv g \\iff f-g \\equiv 0$. 
Consider the set</p>\n\n<p>$$Z = \\{ x \\in W : \\text{there is a neighbourhood } U \\text{ of } x \\text{ such that } f\\lvert_U \\equiv 0\\}.$$</p>\n\n<p>By its definition, $Z$ is an open subset of $W$ (if $x\\in W$ and $U$ is a neighbourhood of $x$ such that $f\\lvert_U \\equiv 0$, then $y\\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p>\n\n<p>$$Z = \\{ x \\in W : D^{\\alpha}f(x) = 0 \\text{ for all } \\alpha \\in \\mathbb{N}^n\\},$$</p>\n\n<p>for the power series representation of $f$ about $x$,</p>\n\n<p>$$f(y) = \\sum_{\\alpha \\in \\mathbb{N}^n} \\frac{D^{\\alpha}f(x)}{\\alpha!}(y-x)^{\\alpha},$$</p>\n\n<p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. By continuity of $D^{\\alpha}f$, the set</p>\n\n<p>$$Z_{\\alpha} = (D^{\\alpha}f)^{-1}(0)$$</p>\n\n<p>is closed (in $W$) for every $\\alpha \\in \\mathbb{N}^n$, and hence so is the intersection</p>\n\n<p>$$Z = \\bigcap_{\\alpha \\in \\mathbb{N}^n} Z_{\\alpha}.$$</p>\n\n<p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \\varnothing$. But $Z \\neq \\varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \\equiv 0$ on $W$.</p>\n" }, { "answer_id": 2101721, "score": 5, "text": "<p>If $u$ has a local maximum at $x_0 \\in \\Omega$, that means that there is an open neighbourhood $U\\subset \\Omega$ of $x_0$ such that $u\\lvert_U$ has a global maximum at $x_0$. Since clearly $u\\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\\lvert_U$ is constant.</p>\n\n<p>The two harmonic functions $u$ and $v \\colon z \\mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\\mathbb{C}\\cong \\mathbb{R}^2$) implies that $u \\equiv v$ on $\\Omega$, i.e. $u$ is constant.</p>\n\n<blockquote>\n <p><strong>Identity theorem:</strong> Let $W\\subset \\mathbb{R}^n$ a connected open set, and $f,g \\colon W \\to \\mathbb{R}$ real-analytic. If there is a nonempty open set $V\\subset W$ such that $f\\lvert_V \\equiv g\\lvert_V$, then $f \\equiv g$ on all of $W$.</p>\n</blockquote>\n\n<p><strong>Proof:</strong> It suffices to consider $g \\equiv 0$, since $f\\equiv g \\iff f-g \\equiv 0$. Consider the set</p>\n\n<p>$$Z = \\{ x \\in W : \\text{there is a neighbourhood } U \\text{ of } x \\text{ such that } f\\lvert_U \\equiv 0\\}.$$</p>\n\n<p>By its definition, $Z$ is an open subset of $W$ (if $x\\in W$ and $U$ is a neighbourhood of $x$ such that $f\\lvert_U \\equiv 0$, then $y\\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p>\n\n<p>$$Z = \\{ x \\in W : D^{\\alpha}f(x) = 0 \\text{ for all } \\alpha \\in \\mathbb{N}^n\\},$$</p>\n\n<p>for the power series representation of $f$ about $x$,</p>\n\n<p>$$f(y) = \\sum_{\\alpha \\in \\mathbb{N}^n} \\frac{D^{\\alpha}f(x)}{\\alpha!}(y-x)^{\\alpha},$$</p>\n\n<p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. 
By continuity of $D^{\\alpha}f$, the set</p>\n\n<p>$$Z_{\\alpha} = (D^{\\alpha}f)^{-1}(0)$$</p>\n\n<p>is closed (in $W$) for every $\\alpha \\in \\mathbb{N}^n$, and hence so is the intersection</p>\n\n<p>$$Z = \\bigcap_{\\alpha \\in \\mathbb{N}^n} Z_{\\alpha}.$$</p>\n\n<p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \\varnothing$. But $Z \\neq \\varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \\equiv 0$ on $W$.</p>\n" }, { "answer_id": 2101721, "score": 5, "text": "<p>If $u$ has a local maximum at $x_0 \\in \\Omega$, that means that there is an open neighbourhood $U\\subset \\Omega$ of $x_0$ such that $u\\lvert_U$ has a global maximum at $x_0$. Since clearly $u\\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\\lvert_U$ is constant.</p>\n\n<p>The two harmonic functions $u$ and $v \\colon z \\mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\\mathbb{C}\\cong \\mathbb{R}^2$) implies that $u \\equiv v$ on $\\Omega$, i.e. $u$ is constant.</p>\n\n<blockquote>\n <p><strong>Identity theorem:</strong> Let $W\\subset \\mathbb{R}^n$ a connected open set, and $f,g \\colon W \\to \\mathbb{R}$ real-analytic. If there is a nonempty open set $V\\subset W$ such that $f\\lvert_V \\equiv g\\lvert_V$, then $f \\equiv g$ on all of $W$.</p>\n</blockquote>\n\n<p><strong>Proof:</strong> It suffices to consider $g \\equiv 0$, since $f\\equiv g \\iff f-g \\equiv 0$. Consider the set</p>\n\n<p>$$Z = \\{ x \\in W : \\text{there is a neighbourhood } U \\text{ of } x \\text{ such that } f\\lvert_U \\equiv 0\\}.$$</p>\n\n<p>By its definition, $Z$ is an open subset of $W$ (if $x\\in W$ and $U$ is a neighbourhood of $x$ such that $f\\lvert_U \\equiv 0$, then $y\\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p>\n\n<p>$$Z = \\{ x \\in W : D^{\\alpha}f(x) = 0 \\text{ for all } \\alpha \\in \\mathbb{N}^n\\},$$</p>\n\n<p>for the power series representation of $f$ about $x$,</p>\n\n<p>$$f(y) = \\sum_{\\alpha \\in \\mathbb{N}^n} \\frac{D^{\\alpha}f(x)}{\\alpha!}(y-x)^{\\alpha},$$</p>\n\n<p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. By continuity of $D^{\\alpha}f$, the set</p>\n\n<p>$$Z_{\\alpha} = (D^{\\alpha}f)^{-1}(0)$$</p>\n\n<p>is closed (in $W$) for every $\\alpha \\in \\mathbb{N}^n$, and hence so is the intersection</p>\n\n<p>$$Z = \\bigcap_{\\alpha \\in \\mathbb{N}^n} Z_{\\alpha}.$$</p>\n\n<p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \\varnothing$. But $Z \\neq \\varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \\equiv 0$ on $W$.</p>\n" } ]
Question: Maximum principle for harmonic functions Answer 1: <p>If $u$ has a local maximum at $x_0 \in \Omega$, that means that there is an open neighbourhood $U\subset \Omega$ of $x_0$ such that $u\lvert_U$ has a global maximum at $x_0$. Since clearly $u\lvert_U$ is harmonic on $U$, by the form of the maximum principle that you already know, it follows that $u\lvert_U$ is constant.</p> <p>The two harmonic functions $u$ and $v \colon z \mapsto u(x_0)$ hence coincide on a nonempty open set (namely on $U$), and since $\Omega$ is connected, and harmonic functions are real-analytic, the following identity theorem (the case $n = 2$ with the usual identification $\mathbb{C}\cong \mathbb{R}^2$) implies that $u \equiv v$ on $\Omega$, i.e. $u$ is constant.</p> <blockquote> <p><strong>Identity theorem:</strong> Let $W\subset \mathbb{R}^n$ be a connected open set, and $f,g \colon W \to \mathbb{R}$ real-analytic. If there is a nonempty open set $V\subset W$ such that $f\lvert_V \equiv g\lvert_V$, then $f \equiv g$ on all of $W$.</p> </blockquote> <p><strong>Proof:</strong> It suffices to consider $g \equiv 0$, since $f\equiv g \iff f-g \equiv 0$. Consider the set</p> <p>$$Z = \{ x \in W : \text{there is a neighbourhood } U \text{ of } x \text{ such that } f\lvert_U \equiv 0\}.$$</p> <p>By its definition, $Z$ is an open subset of $W$ (if $x\in W$ and $U$ is a neighbourhood of $x$ such that $f\lvert_U \equiv 0$, then $y\in Z$ for all $y$ in the interior of $U$). Since $f$ is real analytic, we can also describe $Z$ as</p> <p>$$Z = \{ x \in W : D^{\alpha}f(x) = 0 \text{ for all } \alpha \in \mathbb{N}^n\},$$</p> <p>for the power series representation of $f$ about $x$,</p> <p>$$f(y) = \sum_{\alpha \in \mathbb{N}^n} \frac{D^{\alpha}f(x)}{\alpha!}(y-x)^{\alpha},$$</p> <p>vanishes in a neighbourhood of $x$ if and only if all coefficients vanish. By continuity of $D^{\alpha}f$, the set</p> <p>$$Z_{\alpha} = (D^{\alpha}f)^{-1}(0)$$</p> <p>is closed (in $W$) for every $\alpha \in \mathbb{N}^n$, and hence so is the intersection</p> <p>$$Z = \bigcap_{\alpha \in \mathbb{N}^n} Z_{\alpha}.$$</p> <p>Thus $Z$ is an open and closed subset of $W$, and since $W$ is connected, we have either $Z = W$ or $Z = \varnothing$. But $Z \neq \varnothing$ was part of the hypotheses of the theorem, so $Z = W$ and indeed $f \equiv 0$ on $W$.</p>
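A remark on where real-analyticity enters the identity theorem above: for merely smooth functions the conclusion fails, which is why the power series description of $Z$ is the crucial step. For instance, $$h(x)=\begin{cases} e^{-1/x}, & x>0,\\ 0, & x\le 0,\end{cases}$$ is $C^{\infty}$ on $\mathbb{R}$ and agrees with the zero function on the nonempty open set $(-\infty,0)$, yet $h\not\equiv 0$ on $\mathbb{R}$; it is exactly the power series representation of $f$ about each point that excludes this behaviour in the proof.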
Showing that morphisms of $k$-schemes that agree topologically are equal.
2,086,256
6
https://math.stackexchange.com/questions/2086256/showing-that-morphisms-of-k-schemes-that-agree-topologically-are-equal
<p>Let $k$ be a field, $X$, $Y$ schemes locally of finite type over $k$. Let $X$ be reduced, $k$ algebraically closed. I want to show:</p> <p>Two morphisms $f,g: X \rightarrow Y$ over $k$ are equal if and only if the underlying maps of topological spaces agree. </p> <p>Also I need an example why this fails if the field $k$ is not algebraically closed.</p> <p>Now, the "only if" part is clear, so I only have to prove the other direction. I've got some theorems I think one can work with, such as: if $f,g$ agree on a dense, closed subset, then they are already equal. I can't seem to get in a position to use them yet, though. I think one can try to reduce this problem to the affine case, but I am not really firm in working with schemes, and algebraic geometry as a whole does not really suit me. </p> <p>This bears resemblance to another question asked, namely <a href="https://math.stackexchange.com/questions/630465/morphisms-of-k-schemes-who-agree-on-overlinek-points">Morphisms of $k$-schemes who agree on $\overline{k}$-points.</a> I was not able to construct the proof I need from this though, because the accepted answer uses separatedness of $X$, which is not given or needed. </p> <p>Any help would be appreciated! </p>
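<p>For orientation, a minimal example of the failure over a non-algebraically-closed field (a sketch only, worked out in full in the answers below): take $k=\mathbb{R}$ and $X=Y=\operatorname{Spec}\mathbb{C}$, which is reduced and of finite type over $\mathbb{R}$. The identity and complex conjugation are two distinct $\mathbb{R}$-algebra homomorphisms $\mathbb{C}\rightarrow\mathbb{C}$, hence give two distinct morphisms of $\mathbb{R}$-schemes $X\rightarrow Y$; but $\operatorname{Spec}\mathbb{C}$ consists of a single point, so the underlying topological maps necessarily agree.</p>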
[ { "answer_id": 2086713, "score": 2, "text": "<p>Let $X=Y=\\textrm{Spec}(\\mathbb{Z}_p[x])$, where $p\\in\\mathbb{P}$, and let $\\varphi,\\psi:X\\to Y$ the morphisms of schemes such that\n\\begin{gather}\n\\varphi^{\\sharp}:f(x)\\in\\mathbb{Z}_p[x]\\to0\\in\\mathbb{Z}_p[x],\\\\\n\\psi^{\\sharp}:f(x)\\in\\mathbb{Z}_p[x]\\to f(x^p)-f(x)\\in\\mathbb{Z}_p[x];\n\\end{gather}\nas maps of topological spaces $\\varphi=\\psi$ but as morphisms of (affine) schemes $\\left(\\varphi,\\varphi^{\\sharp}\\right)\\neq\\left(\\psi,\\psi^{\\sharp}\\right)$.</p>\n" }, { "answer_id": 2089268, "score": 2, "text": "<p>Since Y is locally of finite type and it is sufficient to show, that the morphisms agree locally we can reduce to $\\ f,g: Spec(B)\\rightarrow Spec(A)$ where A and B are finitely generated k-Algebras and B is reduced. </p>\n\n<p>Now since $Spec(A)= Spec(k[X_1,...X_n]/ \\mathfrak{a})$ we have a closed immersion into $A_k^n$. Closed Immersions are Monos, so w.l.o.g. we can assume $Y=A_k^n$. The later is the fiberproduct of n $A_k^1$. By the universal property of the fiberproduct we can finaly reduce to $Y=A_k^1$.</p>\n\n<p>So we have reduced the problem to morphisms $f,g: Spec(B)\\rightarrow A_k^1$ i. e. morphisms of k-Algebras $f',g':k[X]\\rightarrow B =k[X_1,...X_m]/ \\mathfrak{b}$ for some radical ideal $\\mathfrak{b}$, where $f'^{-1}(\\mathfrak{p})=g'^{-1}(\\mathfrak{p})$ for every prime Ideal $\\mathfrak{p} \\subset B$.</p>\n\n<p>Suppose $f'\\neq g'$. That implies $f'(X)\\neq g'(X)$. Let $p +\\mathfrak{b}=f'(X)$ and $ q +\\mathfrak{b}=g'(X)$. We get $p-q\\notin \\mathfrak{b}$ and since $\\mathfrak{b}$ is a radical ideal and $k[X_1,....,X_n]$ jacobson we get some maximal ideal $\\mathfrak{m} \\subset k[X_1,....,X_n]$ which contains $\\mathfrak{b}$ and not $p-q$, , i.e. $f'(X)-g'(X)\\notin \\mathfrak{m}/\\mathfrak{b}.$</p>\n\n<p>Extending the Morphisms to $k[X_1,...,X_m]/\\mathfrak{m}$ we get morphisms with kernel $f'^{-1}(\\mathfrak{m})=g'^{-1}(\\mathfrak{m})$. Since $k$ is algebraically closed $k[X_1,...,X_m]/\\mathfrak{m}\\cong k$.</p>\n\n<p>Let $\\alpha$ be the Element in $k$ that corresponds to $\\pi\\circ f'(X)$, then $\\pi\\circ f'(X-\\alpha)=0 \\in $ and therefore by our assumption on $f',g':$ $\\pi\\circ g'(X-\\alpha)=0$ so $\\pi(f'(X-\\alpha)-g'(X-\\alpha))=\\pi(f'(X)-g'(X))=0$ . Which is a contradiction.</p>\n\n<p>As to the counterexample, just try $f;g: \\Bbb{R} [X]\\rightarrow\\Bbb{C}\\space X\\mapsto i\\space ;\\space X\\mapsto-i.$ They obviously have the same kernel, and since 0 is the only prime in $\\Bbb{C}$ that means the respective topological maps agree.</p>\n" }, { "answer_id": 2086713, "score": 2, "text": "<p>Let $X=Y=\\textrm{Spec}(\\mathbb{Z}_p[x])$, where $p\\in\\mathbb{P}$, and let $\\varphi,\\psi:X\\to Y$ the morphisms of schemes such that\n\\begin{gather}\n\\varphi^{\\sharp}:f(x)\\in\\mathbb{Z}_p[x]\\to0\\in\\mathbb{Z}_p[x],\\\\\n\\psi^{\\sharp}:f(x)\\in\\mathbb{Z}_p[x]\\to f(x^p)-f(x)\\in\\mathbb{Z}_p[x];\n\\end{gather}\nas maps of topological spaces $\\varphi=\\psi$ but as morphisms of (affine) schemes $\\left(\\varphi,\\varphi^{\\sharp}\\right)\\neq\\left(\\psi,\\psi^{\\sharp}\\right)$.</p>\n" }, { "answer_id": 2089268, "score": 2, "text": "<p>Since Y is locally of finite type and it is sufficient to show, that the morphisms agree locally we can reduce to $\\ f,g: Spec(B)\\rightarrow Spec(A)$ where A and B are finitely generated k-Algebras and B is reduced. </p>\n\n<p>Now since $Spec(A)= Spec(k[X_1,...X_n]/ \\mathfrak{a})$ we have a closed immersion into $A_k^n$. 
Closed Immersions are Monos, so w.l.o.g. we can assume $Y=A_k^n$. The later is the fiberproduct of n $A_k^1$. By the universal property of the fiberproduct we can finaly reduce to $Y=A_k^1$.</p>\n\n<p>So we have reduced the problem to morphisms $f,g: Spec(B)\\rightarrow A_k^1$ i. e. morphisms of k-Algebras $f',g':k[X]\\rightarrow B =k[X_1,...X_m]/ \\mathfrak{b}$ for some radical ideal $\\mathfrak{b}$, where $f'^{-1}(\\mathfrak{p})=g'^{-1}(\\mathfrak{p})$ for every prime Ideal $\\mathfrak{p} \\subset B$.</p>\n\n<p>Suppose $f'\\neq g'$. That implies $f'(X)\\neq g'(X)$. Let $p +\\mathfrak{b}=f'(X)$ and $ q +\\mathfrak{b}=g'(X)$. We get $p-q\\notin \\mathfrak{b}$ and since $\\mathfrak{b}$ is a radical ideal and $k[X_1,....,X_n]$ jacobson we get some maximal ideal $\\mathfrak{m} \\subset k[X_1,....,X_n]$ which contains $\\mathfrak{b}$ and not $p-q$, , i.e. $f'(X)-g'(X)\\notin \\mathfrak{m}/\\mathfrak{b}.$</p>\n\n<p>Extending the Morphisms to $k[X_1,...,X_m]/\\mathfrak{m}$ we get morphisms with kernel $f'^{-1}(\\mathfrak{m})=g'^{-1}(\\mathfrak{m})$. Since $k$ is algebraically closed $k[X_1,...,X_m]/\\mathfrak{m}\\cong k$.</p>\n\n<p>Let $\\alpha$ be the Element in $k$ that corresponds to $\\pi\\circ f'(X)$, then $\\pi\\circ f'(X-\\alpha)=0 \\in $ and therefore by our assumption on $f',g':$ $\\pi\\circ g'(X-\\alpha)=0$ so $\\pi(f'(X-\\alpha)-g'(X-\\alpha))=\\pi(f'(X)-g'(X))=0$ . Which is a contradiction.</p>\n\n<p>As to the counterexample, just try $f;g: \\Bbb{R} [X]\\rightarrow\\Bbb{C}\\space X\\mapsto i\\space ;\\space X\\mapsto-i.$ They obviously have the same kernel, and since 0 is the only prime in $\\Bbb{C}$ that means the respective topological maps agree.</p>\n" }, { "answer_id": 2086713, "score": 2, "text": "<p>Let $X=Y=\\textrm{Spec}(\\mathbb{Z}_p[x])$, where $p\\in\\mathbb{P}$, and let $\\varphi,\\psi:X\\to Y$ the morphisms of schemes such that\n\\begin{gather}\n\\varphi^{\\sharp}:f(x)\\in\\mathbb{Z}_p[x]\\to0\\in\\mathbb{Z}_p[x],\\\\\n\\psi^{\\sharp}:f(x)\\in\\mathbb{Z}_p[x]\\to f(x^p)-f(x)\\in\\mathbb{Z}_p[x];\n\\end{gather}\nas maps of topological spaces $\\varphi=\\psi$ but as morphisms of (affine) schemes $\\left(\\varphi,\\varphi^{\\sharp}\\right)\\neq\\left(\\psi,\\psi^{\\sharp}\\right)$.</p>\n" }, { "answer_id": 2089268, "score": 2, "text": "<p>Since Y is locally of finite type and it is sufficient to show, that the morphisms agree locally we can reduce to $\\ f,g: Spec(B)\\rightarrow Spec(A)$ where A and B are finitely generated k-Algebras and B is reduced. </p>\n\n<p>Now since $Spec(A)= Spec(k[X_1,...X_n]/ \\mathfrak{a})$ we have a closed immersion into $A_k^n$. Closed Immersions are Monos, so w.l.o.g. we can assume $Y=A_k^n$. The later is the fiberproduct of n $A_k^1$. By the universal property of the fiberproduct we can finaly reduce to $Y=A_k^1$.</p>\n\n<p>So we have reduced the problem to morphisms $f,g: Spec(B)\\rightarrow A_k^1$ i. e. morphisms of k-Algebras $f',g':k[X]\\rightarrow B =k[X_1,...X_m]/ \\mathfrak{b}$ for some radical ideal $\\mathfrak{b}$, where $f'^{-1}(\\mathfrak{p})=g'^{-1}(\\mathfrak{p})$ for every prime Ideal $\\mathfrak{p} \\subset B$.</p>\n\n<p>Suppose $f'\\neq g'$. That implies $f'(X)\\neq g'(X)$. Let $p +\\mathfrak{b}=f'(X)$ and $ q +\\mathfrak{b}=g'(X)$. We get $p-q\\notin \\mathfrak{b}$ and since $\\mathfrak{b}$ is a radical ideal and $k[X_1,....,X_n]$ jacobson we get some maximal ideal $\\mathfrak{m} \\subset k[X_1,....,X_n]$ which contains $\\mathfrak{b}$ and not $p-q$, , i.e. 
Question: Showing that morphisms of $k$-schemes that agree topologically are equal. Answer 1: <p>Let $X=Y=\textrm{Spec}(\mathbb{Z}_p[x])$, where $p\in\mathbb{P}$, and let $\varphi,\psi:X\to Y$ be the morphisms of schemes such that \begin{gather} \varphi^{\sharp}:f(x)\in\mathbb{Z}_p[x]\to0\in\mathbb{Z}_p[x],\\ \psi^{\sharp}:f(x)\in\mathbb{Z}_p[x]\to f(x^p)-f(x)\in\mathbb{Z}_p[x]; \end{gather} as maps of topological spaces $\varphi=\psi$, but as morphisms of (affine) schemes $\left(\varphi,\varphi^{\sharp}\right)\neq\left(\psi,\psi^{\sharp}\right)$.</p> Answer 2: <p>Since $Y$ is locally of finite type and it is sufficient to show that the morphisms agree locally, we can reduce to $f,g: Spec(B)\rightarrow Spec(A)$, where $A$ and $B$ are finitely generated $k$-algebras and $B$ is reduced.</p> <p>Now since $Spec(A)= Spec(k[X_1,...,X_n]/ \mathfrak{a})$, we have a closed immersion into $A_k^n$. Closed immersions are monomorphisms, so w.l.o.g. we can assume $Y=A_k^n$. The latter is the fiber product of $n$ copies of $A_k^1$, so by the universal property of the fiber product we can finally reduce to $Y=A_k^1$.</p> <p>So we have reduced the problem to morphisms $f,g: Spec(B)\rightarrow A_k^1$, i.e. morphisms of $k$-algebras $f',g':k[X]\rightarrow B =k[X_1,...,X_m]/ \mathfrak{b}$ for some radical ideal $\mathfrak{b}$, where $f'^{-1}(\mathfrak{p})=g'^{-1}(\mathfrak{p})$ for every prime ideal $\mathfrak{p} \subset B$.</p> <p>Suppose $f'\neq g'$. That implies $f'(X)\neq g'(X)$. Let $p +\mathfrak{b}=f'(X)$ and $q +\mathfrak{b}=g'(X)$. We get $p-q\notin \mathfrak{b}$, and since $\mathfrak{b}$ is a radical ideal and $k[X_1,...,X_m]$ is Jacobson, we get some maximal ideal $\mathfrak{m} \subset k[X_1,...,X_m]$ which contains $\mathfrak{b}$ but not $p-q$, i.e. $f'(X)-g'(X)\notin \mathfrak{m}/\mathfrak{b}$.</p> <p>Extending the morphisms to $k[X_1,...,X_m]/\mathfrak{m}$ via the quotient map $\pi$, we get morphisms with kernel $f'^{-1}(\mathfrak{m}/\mathfrak{b})=g'^{-1}(\mathfrak{m}/\mathfrak{b})$. Since $k$ is algebraically closed, $k[X_1,...,X_m]/\mathfrak{m}\cong k$.</p> <p>Let $\alpha$ be the element in $k$ that corresponds to $\pi\circ f'(X)$. Then $\pi\circ f'(X-\alpha)=0$, and therefore, by our assumption on $f',g'$, also $\pi\circ g'(X-\alpha)=0$, so $\pi(f'(X-\alpha)-g'(X-\alpha))=\pi(f'(X)-g'(X))=0$, which is a contradiction.</p> <p>As to the counterexample, just try $f,g: \Bbb{R}[X]\rightarrow\Bbb{C}$, $X\mapsto i$ and $X\mapsto -i$. They obviously have the same kernel, and since $(0)$ is the only prime ideal in $\Bbb{C}$, the respective topological maps agree.</p>
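As a quick worked check of the $\Bbb{R}[X]\to\Bbb{C}$ counterexample (a minimal sketch, assuming only the standard fact that a real polynomial vanishing at $i$ also vanishes at $\bar{i}=-i$):

```latex
% Kernels: a real polynomial p satisfies p(i) = 0 iff p(-i) = \overline{p(i)} = 0,
% so both evaluation maps kill exactly the multiples of X^2 + 1.
\ker f \;=\; \{\, p \in \mathbb{R}[X] : p(i) = 0 \,\} \;=\; (X^2 + 1) \;=\; \ker g .
% Topology: \operatorname{Spec}(\mathbb{C}) = \{(0)\} is a single point, and both
% induced maps send (0) to f^{-1}((0)) = g^{-1}((0)) = (X^2 + 1), so they agree.
% Algebra: f(X) = i \neq -i = g(X), so f \neq g as ring homomorphisms.
```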
Roots of quadratic polynomials are positive integers
2,087,003
6
https://math.stackexchange.com/questions/2087003/roots-of-quadratic-polynomials-are-positive-integer-numbers
<p>Find positive integers $a,b,c$ such that all the roots of these quadratic polynomials are positive integers:</p> <p>1) $x^2−2ax+b=0$</p> <p>2) $x^2−2bx+c=0$</p> <p>3) $x^2−2cx+a=0$</p> <p>Sorry, but I don't know what to do :(( Please help me.</p>
[ { "answer_id": 2087014, "score": 4, "text": "<p>The only solution in positive integers is $a=b=c=1$.</p>\n\n<p>For the equations to have integer roots, their discriminants must be perfect squares. Taking the first equation for example, this means $a^2-b$ must be a perfect square. But $a^2-b \\lt a^2$ thus it must be less than or equal the next smaller perfect square below $a^2$ which is $(a-1)^2$. Therefore:</p>\n\n<p>$$a^2-b \\le (a-1)^2 = a^2 - 2a +1 \\quad \\iff \\quad b + 1 \\ge 2a$$</p>\n\n<p>Adding the $3$ similar inequalities together gives:</p>\n\n<p>$$a+b+c+3 \\ge 2(a+b+c) \\quad \\iff \\quad a+b+c \\le 3$$</p>\n\n<p>The latter has $a=b=c=1$ as the unique solution in positive integers, in which case all $3$ equations reduce to $x^2-2x+1=(x-1)^2=0$ with the positive integer $1$ as a double root.</p>\n" }, { "answer_id": 2087017, "score": 2, "text": "<p>Hint:</p>\n\n<p>The trivial case where $a=b=c=1$ gives you the monic polynomial $x^2 - 2x + 1$ which by Descarte's Rule of signs has two positive roots and by rational root theorem has two roots both $x=1$.</p>\n" }, { "answer_id": 2087014, "score": 4, "text": "<p>The only solution in positive integers is $a=b=c=1$.</p>\n\n<p>For the equations to have integer roots, their discriminants must be perfect squares. Taking the first equation for example, this means $a^2-b$ must be a perfect square. But $a^2-b \\lt a^2$ thus it must be less than or equal the next smaller perfect square below $a^2$ which is $(a-1)^2$. Therefore:</p>\n\n<p>$$a^2-b \\le (a-1)^2 = a^2 - 2a +1 \\quad \\iff \\quad b + 1 \\ge 2a$$</p>\n\n<p>Adding the $3$ similar inequalities together gives:</p>\n\n<p>$$a+b+c+3 \\ge 2(a+b+c) \\quad \\iff \\quad a+b+c \\le 3$$</p>\n\n<p>The latter has $a=b=c=1$ as the unique solution in positive integers, in which case all $3$ equations reduce to $x^2-2x+1=(x-1)^2=0$ with the positive integer $1$ as a double root.</p>\n" }, { "answer_id": 2087017, "score": 2, "text": "<p>Hint:</p>\n\n<p>The trivial case where $a=b=c=1$ gives you the monic polynomial $x^2 - 2x + 1$ which by Descarte's Rule of signs has two positive roots and by rational root theorem has two roots both $x=1$.</p>\n" }, { "answer_id": 2087014, "score": 4, "text": "<p>The only solution in positive integers is $a=b=c=1$.</p>\n\n<p>For the equations to have integer roots, their discriminants must be perfect squares. Taking the first equation for example, this means $a^2-b$ must be a perfect square. But $a^2-b \\lt a^2$ thus it must be less than or equal the next smaller perfect square below $a^2$ which is $(a-1)^2$. Therefore:</p>\n\n<p>$$a^2-b \\le (a-1)^2 = a^2 - 2a +1 \\quad \\iff \\quad b + 1 \\ge 2a$$</p>\n\n<p>Adding the $3$ similar inequalities together gives:</p>\n\n<p>$$a+b+c+3 \\ge 2(a+b+c) \\quad \\iff \\quad a+b+c \\le 3$$</p>\n\n<p>The latter has $a=b=c=1$ as the unique solution in positive integers, in which case all $3$ equations reduce to $x^2-2x+1=(x-1)^2=0$ with the positive integer $1$ as a double root.</p>\n" }, { "answer_id": 2087017, "score": 2, "text": "<p>Hint:</p>\n\n<p>The trivial case where $a=b=c=1$ gives you the monic polynomial $x^2 - 2x + 1$ which by Descarte's Rule of signs has two positive roots and by rational root theorem has two roots both $x=1$.</p>\n" }, { "answer_id": 2087014, "score": 4, "text": "<p>The only solution in positive integers is $a=b=c=1$.</p>\n\n<p>For the equations to have integer roots, their discriminants must be perfect squares. Taking the first equation for example, this means $a^2-b$ must be a perfect square. 
Question: roots of quadratic polynomials are positive integer numbers Answer 1: <p>The only solution in positive integers is $a=b=c=1$.</p> <p>For the equations to have integer roots, their discriminants must be perfect squares. Taking the first equation for example, this means $a^2-b$ must be a perfect square. But $a^2-b \lt a^2$, so it must be less than or equal to the next smaller perfect square below $a^2$, which is $(a-1)^2$. Therefore:</p> <p>$$a^2-b \le (a-1)^2 = a^2 - 2a +1 \quad \iff \quad b + 1 \ge 2a$$</p> <p>Adding the $3$ similar inequalities together gives:</p> <p>$$a+b+c+3 \ge 2(a+b+c) \quad \iff \quad a+b+c \le 3$$</p> <p>The latter has $a=b=c=1$ as the unique solution in positive integers, in which case all $3$ equations reduce to $x^2-2x+1=(x-1)^2=0$ with the positive integer $1$ as a double root.</p> Answer 2: <p>Hint:</p> <p>The trivial case where $a=b=c=1$ gives you the monic polynomial $x^2 - 2x + 1$, which by Descartes' Rule of Signs has two positive roots and by the rational root theorem has the double root $x=1$.</p>
Normalizer of upper triangular group in ${\rm GL}(n,F)$
2,087,062
6
https://math.stackexchange.com/questions/2087062/normalizer-of-upper-triangular-group-in-rm-gln-f
<p>The following question has already appeared on mathstack:</p> <blockquote> <p>If $B$ is the subgroup of ${\rm GL}(n,F)$ consisting of upper triangular matrices, then the normalizer of $B$ in ${\rm GL}(n,F)$ is $B$ itself.</p> </blockquote> <p>I know a proof of this using the Bruhat decomposition of ${\rm GL}(n,F)$.</p> <p><strong>Question:</strong> Can we prove the above theorem without using the Bruhat decomposition?</p> <hr> <p><strong>Why I came to this question:</strong> Consider the general linear Lie algebra $L=\mathfrak{gl}(n,F)$; in it, let $T=\mathfrak{t}(n,F)$ be the upper triangular subalgebra. Then the normalizer of $T$ in $L$ is $T$ itself, and this can be proved just by considering a very simple decomposition of ${\mathfrak gl}(n,F)$: write any element as the <em>sum</em> of an upper triangular matrix and a lower triangular matrix whose diagonal is $0$.</p> <p>But then, for the problem above, is it <em>necessary</em> to use the Bruhat decomposition?</p>
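(Editorial note, not part of the original post: for concreteness, here is the $n=2$ instance of the decomposition argument alluded to above. Writing $X=\begin{pmatrix}p&q\\r&s\end{pmatrix}$ and taking $E_{11}\in\mathfrak{t}(2,F)$,

$$[X,E_{11}] = XE_{11}-E_{11}X = \begin{pmatrix}p&0\\r&0\end{pmatrix}-\begin{pmatrix}p&q\\0&0\end{pmatrix} = \begin{pmatrix}0&-q\\r&0\end{pmatrix},$$

which lies in $\mathfrak{t}(2,F)$ only if $r=0$; so any $X$ normalizing $\mathfrak{t}(2,F)$ is itself upper triangular.)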
[ { "answer_id": 2087257, "score": 9, "text": "<p>$\\DeclareMathOperator{\\GL}{GL}$$\\newcommand{\\Span}[1]{\\left\\langle #1 \\right\\rangle}$$\\newcommand{\\Set}[1]{\\left\\{ #1 \\right\\}}$Let $e_0, e_1, \\dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write\n$$\nV_i = \\Span{ e_j : j \\ge i}.\n$$\nAllow me to use row vectors, so that the group $G = \\GL(n, F)$ acts on the right.</p>\n\n<p>Then \n$$\nB = \\Set{b \\in G : V_i b \\subseteq V_i \\text{ for each $i$}}.\n$$\nMoreover, </p>\n\n<blockquote>\n <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \\subseteq W$. </p>\n</blockquote>\n\n<p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \\times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p>\n\n<p>Let $g \\in N_{G}(B)$. Then for each $b \\in B$ we have $g b g^{-1} \\in B$, that is for all $i$\n$$\nV_{i} g b g^{-1} \\subseteq V_i\n$$\nor\n$$\n(V_{i} g) b \\subseteq V_i g.\n$$\nIt follows from the above that $V_{i} g = V_{i}$, so that $g \\in B$.</p>\n" }, { "answer_id": 3058282, "score": 8, "text": "<p>It can be shown by direct computation. First, we need the following lemma.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Lemma.</strong> Let <span class=\"math-container\">$g\\in GL(n,F)$</span>. Let <span class=\"math-container\">$h:=g^{-1}$</span>. If there are entries of <span class=\"math-container\">$g$</span> and <span class=\"math-container\">$h$</span> such that \n <span class=\"math-container\">$$g_{ij}\\neq 0,h_{kj}\\neq 0,i&gt;j,k\\geq j,$$</span>\n then <span class=\"math-container\">$g$</span> is not in <span class=\"math-container\">$N(B)$</span>, the normalizer of <span class=\"math-container\">$B$</span>.</p>\n</blockquote>\n\n<p>(Proof)Let <span class=\"math-container\">$b=I+E_{jk}$</span>, where <span class=\"math-container\">$E_{jk}$</span> is the matrix whose entries are all zero except for the <span class=\"math-container\">$(j,k)$</span>-entry. Since <span class=\"math-container\">$k\\geq j$</span>, <span class=\"math-container\">$b$</span> is an element of <span class=\"math-container\">$B$</span>. Notice that <span class=\"math-container\">$(gbg^{-1})_{ij}=(g)_{ij}(h)_{kj}\\neq 0$</span>. So <span class=\"math-container\">$gbg^{-1}$</span> is not upper triangular(, for <span class=\"math-container\">$i&gt;j$</span>). Hence <span class=\"math-container\">$gbg^{-1}\\not \\in B$</span>. Therefore <span class=\"math-container\">$g\\not \\in N(B)$</span>. <span class=\"math-container\">$\\square$</span></p>\n\n<hr>\n\n<p>Now we prove that <span class=\"math-container\">$N(B)=B$</span> by induction on <span class=\"math-container\">$n$</span>. </p>\n\n<p>Let us denote by <span class=\"math-container\">$B_n$</span> the set of invertible upper triangular matrices of size n and by <span class=\"math-container\">$N(B_n)$</span> its normalizer in <span class=\"math-container\">$GL(n,F)$</span>. Assuming that <span class=\"math-container\">$N(B_{n-1})=B_{n-1}$</span>, we prove <span class=\"math-container\">$N(B_{n})=B_{n}$</span>.</p>\n\n<p>Suppose that <span class=\"math-container\">$g_0\\in N(B)$</span>. Let <span class=\"math-container\">$h_0:=g_0^{-1}$</span>. Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. 
If we have <span class=\"math-container\">$(g_0)_{i,1}\\neq 0, (h_0)_{k,1}\\neq 0$</span> for some <span class=\"math-container\">$i,k$</span>, then the above lemma implies that <span class=\"math-container\">$i=1$</span>. On the other hand, Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. Hence <span class=\"math-container\">$g_0$</span> has the form<span class=\"math-container\">$$g_0=\\left[\\begin{array}{cccc}\nd_1 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span>\nWe then multiply a product <span class=\"math-container\">$c_1$</span> of elementary matrices that corresponds to adding multiples of the first column to other columns, which is in <span class=\"math-container\">$B$</span>, to obtain an element <span class=\"math-container\">$g_1:=g_0c_1$</span> of the form\n <span class=\"math-container\">$$g_{1}=\\left[\\begin{array}{cccc}\nd_1 &amp; 0 &amp; \\cdots &amp; 0\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span></p>\n\n<p>The <span class=\"math-container\">$(n-1)\\times (n-1)$</span> matrix <span class=\"math-container\">$g_1'$</span> that is obtained by deleting the first row and the first column of <span class=\"math-container\">$g_1$</span> is in <span class=\"math-container\">$N(B_{n-1})$</span>; thus by induction hypothesis, <span class=\"math-container\">$g_1'\\in B_{n-1}$</span>. It follows that <span class=\"math-container\">$g_1\\in B_{n}$</span>. Hence <span class=\"math-container\">$N(B_{n})\\subset B_{n}$</span>.The converse inclusion holds trivially.</p>\n" }, { "answer_id": 2087257, "score": 9, "text": "<p>$\\DeclareMathOperator{\\GL}{GL}$$\\newcommand{\\Span}[1]{\\left\\langle #1 \\right\\rangle}$$\\newcommand{\\Set}[1]{\\left\\{ #1 \\right\\}}$Let $e_0, e_1, \\dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write\n$$\nV_i = \\Span{ e_j : j \\ge i}.\n$$\nAllow me to use row vectors, so that the group $G = \\GL(n, F)$ acts on the right.</p>\n\n<p>Then \n$$\nB = \\Set{b \\in G : V_i b \\subseteq V_i \\text{ for each $i$}}.\n$$\nMoreover, </p>\n\n<blockquote>\n <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \\subseteq W$. </p>\n</blockquote>\n\n<p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \\times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p>\n\n<p>Let $g \\in N_{G}(B)$. Then for each $b \\in B$ we have $g b g^{-1} \\in B$, that is for all $i$\n$$\nV_{i} g b g^{-1} \\subseteq V_i\n$$\nor\n$$\n(V_{i} g) b \\subseteq V_i g.\n$$\nIt follows from the above that $V_{i} g = V_{i}$, so that $g \\in B$.</p>\n" }, { "answer_id": 3058282, "score": 8, "text": "<p>It can be shown by direct computation. First, we need the following lemma.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Lemma.</strong> Let <span class=\"math-container\">$g\\in GL(n,F)$</span>. Let <span class=\"math-container\">$h:=g^{-1}$</span>. 
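(Editorial note, not from either answer: the statement can be verified exhaustively in the smallest nontrivial case, $G=GL(2,\mathbb{F}_2)$, with a short Python sketch.)

```python
from itertools import product

P = 2  # work over the field F_2

def mul(x, y):
    # 2x2 matrix product mod P
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2)) % P
                       for j in range(2)) for i in range(2))

def det(x):
    return (x[0][0] * x[1][1] - x[0][1] * x[1][0]) % P

# G = GL(2, F_2): the 6 invertible 2x2 matrices over F_2
G = [((a, b), (c, d))
     for a, b, c, d in product(range(P), repeat=4)
     if det(((a, b), (c, d))) != 0]

# B: the invertible upper-triangular matrices (here just 2 elements)
B = [m for m in G if m[1][0] == 0]

I = ((1, 0), (0, 1))
inv = {g: next(h for h in G if mul(g, h) == I) for g in G}

# N(B) = { g in G : g B g^{-1} subset of B }
N = [g for g in G if all(mul(mul(g, b), inv[g]) in B for b in B)]
print(sorted(N) == sorted(B))  # True: B is self-normalizing
```

Group-theoretically the same fact is visible via $GL(2,\mathbb{F}_2)\cong S_3$: there $B$ corresponds to a subgroup of order $2$, and order-$2$ subgroups of $S_3$ are self-normalizing.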
If there are entries of <span class=\"math-container\">$g$</span> and <span class=\"math-container\">$h$</span> such that \n <span class=\"math-container\">$$g_{ij}\\neq 0,h_{kj}\\neq 0,i&gt;j,k\\geq j,$$</span>\n then <span class=\"math-container\">$g$</span> is not in <span class=\"math-container\">$N(B)$</span>, the normalizer of <span class=\"math-container\">$B$</span>.</p>\n</blockquote>\n\n<p>(Proof)Let <span class=\"math-container\">$b=I+E_{jk}$</span>, where <span class=\"math-container\">$E_{jk}$</span> is the matrix whose entries are all zero except for the <span class=\"math-container\">$(j,k)$</span>-entry. Since <span class=\"math-container\">$k\\geq j$</span>, <span class=\"math-container\">$b$</span> is an element of <span class=\"math-container\">$B$</span>. Notice that <span class=\"math-container\">$(gbg^{-1})_{ij}=(g)_{ij}(h)_{kj}\\neq 0$</span>. So <span class=\"math-container\">$gbg^{-1}$</span> is not upper triangular(, for <span class=\"math-container\">$i&gt;j$</span>). Hence <span class=\"math-container\">$gbg^{-1}\\not \\in B$</span>. Therefore <span class=\"math-container\">$g\\not \\in N(B)$</span>. <span class=\"math-container\">$\\square$</span></p>\n\n<hr>\n\n<p>Now we prove that <span class=\"math-container\">$N(B)=B$</span> by induction on <span class=\"math-container\">$n$</span>. </p>\n\n<p>Let us denote by <span class=\"math-container\">$B_n$</span> the set of invertible upper triangular matrices of size n and by <span class=\"math-container\">$N(B_n)$</span> its normalizer in <span class=\"math-container\">$GL(n,F)$</span>. Assuming that <span class=\"math-container\">$N(B_{n-1})=B_{n-1}$</span>, we prove <span class=\"math-container\">$N(B_{n})=B_{n}$</span>.</p>\n\n<p>Suppose that <span class=\"math-container\">$g_0\\in N(B)$</span>. Let <span class=\"math-container\">$h_0:=g_0^{-1}$</span>. Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. If we have <span class=\"math-container\">$(g_0)_{i,1}\\neq 0, (h_0)_{k,1}\\neq 0$</span> for some <span class=\"math-container\">$i,k$</span>, then the above lemma implies that <span class=\"math-container\">$i=1$</span>. On the other hand, Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. 
Hence <span class=\"math-container\">$g_0$</span> has the form<span class=\"math-container\">$$g_0=\\left[\\begin{array}{cccc}\nd_1 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span>\nWe then multiply a product <span class=\"math-container\">$c_1$</span> of elementary matrices that corresponds to adding multiples of the first column to other columns, which is in <span class=\"math-container\">$B$</span>, to obtain an element <span class=\"math-container\">$g_1:=g_0c_1$</span> of the form\n <span class=\"math-container\">$$g_{1}=\\left[\\begin{array}{cccc}\nd_1 &amp; 0 &amp; \\cdots &amp; 0\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span></p>\n\n<p>The <span class=\"math-container\">$(n-1)\\times (n-1)$</span> matrix <span class=\"math-container\">$g_1'$</span> that is obtained by deleting the first row and the first column of <span class=\"math-container\">$g_1$</span> is in <span class=\"math-container\">$N(B_{n-1})$</span>; thus by induction hypothesis, <span class=\"math-container\">$g_1'\\in B_{n-1}$</span>. It follows that <span class=\"math-container\">$g_1\\in B_{n}$</span>. Hence <span class=\"math-container\">$N(B_{n})\\subset B_{n}$</span>.The converse inclusion holds trivially.</p>\n" }, { "answer_id": 2087257, "score": 9, "text": "<p>$\\DeclareMathOperator{\\GL}{GL}$$\\newcommand{\\Span}[1]{\\left\\langle #1 \\right\\rangle}$$\\newcommand{\\Set}[1]{\\left\\{ #1 \\right\\}}$Let $e_0, e_1, \\dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write\n$$\nV_i = \\Span{ e_j : j \\ge i}.\n$$\nAllow me to use row vectors, so that the group $G = \\GL(n, F)$ acts on the right.</p>\n\n<p>Then \n$$\nB = \\Set{b \\in G : V_i b \\subseteq V_i \\text{ for each $i$}}.\n$$\nMoreover, </p>\n\n<blockquote>\n <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \\subseteq W$. </p>\n</blockquote>\n\n<p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \\times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p>\n\n<p>Let $g \\in N_{G}(B)$. Then for each $b \\in B$ we have $g b g^{-1} \\in B$, that is for all $i$\n$$\nV_{i} g b g^{-1} \\subseteq V_i\n$$\nor\n$$\n(V_{i} g) b \\subseteq V_i g.\n$$\nIt follows from the above that $V_{i} g = V_{i}$, so that $g \\in B$.</p>\n" }, { "answer_id": 3058282, "score": 8, "text": "<p>It can be shown by direct computation. First, we need the following lemma.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Lemma.</strong> Let <span class=\"math-container\">$g\\in GL(n,F)$</span>. Let <span class=\"math-container\">$h:=g^{-1}$</span>. 
If there are entries of <span class=\"math-container\">$g$</span> and <span class=\"math-container\">$h$</span> such that \n <span class=\"math-container\">$$g_{ij}\\neq 0,h_{kj}\\neq 0,i&gt;j,k\\geq j,$$</span>\n then <span class=\"math-container\">$g$</span> is not in <span class=\"math-container\">$N(B)$</span>, the normalizer of <span class=\"math-container\">$B$</span>.</p>\n</blockquote>\n\n<p>(Proof)Let <span class=\"math-container\">$b=I+E_{jk}$</span>, where <span class=\"math-container\">$E_{jk}$</span> is the matrix whose entries are all zero except for the <span class=\"math-container\">$(j,k)$</span>-entry. Since <span class=\"math-container\">$k\\geq j$</span>, <span class=\"math-container\">$b$</span> is an element of <span class=\"math-container\">$B$</span>. Notice that <span class=\"math-container\">$(gbg^{-1})_{ij}=(g)_{ij}(h)_{kj}\\neq 0$</span>. So <span class=\"math-container\">$gbg^{-1}$</span> is not upper triangular(, for <span class=\"math-container\">$i&gt;j$</span>). Hence <span class=\"math-container\">$gbg^{-1}\\not \\in B$</span>. Therefore <span class=\"math-container\">$g\\not \\in N(B)$</span>. <span class=\"math-container\">$\\square$</span></p>\n\n<hr>\n\n<p>Now we prove that <span class=\"math-container\">$N(B)=B$</span> by induction on <span class=\"math-container\">$n$</span>. </p>\n\n<p>Let us denote by <span class=\"math-container\">$B_n$</span> the set of invertible upper triangular matrices of size n and by <span class=\"math-container\">$N(B_n)$</span> its normalizer in <span class=\"math-container\">$GL(n,F)$</span>. Assuming that <span class=\"math-container\">$N(B_{n-1})=B_{n-1}$</span>, we prove <span class=\"math-container\">$N(B_{n})=B_{n}$</span>.</p>\n\n<p>Suppose that <span class=\"math-container\">$g_0\\in N(B)$</span>. Let <span class=\"math-container\">$h_0:=g_0^{-1}$</span>. Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. If we have <span class=\"math-container\">$(g_0)_{i,1}\\neq 0, (h_0)_{k,1}\\neq 0$</span> for some <span class=\"math-container\">$i,k$</span>, then the above lemma implies that <span class=\"math-container\">$i=1$</span>. On the other hand, Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. 
Hence <span class=\"math-container\">$g_0$</span> has the form<span class=\"math-container\">$$g_0=\\left[\\begin{array}{cccc}\nd_1 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span>\nWe then multiply a product <span class=\"math-container\">$c_1$</span> of elementary matrices that corresponds to adding multiples of the first column to other columns, which is in <span class=\"math-container\">$B$</span>, to obtain an element <span class=\"math-container\">$g_1:=g_0c_1$</span> of the form\n <span class=\"math-container\">$$g_{1}=\\left[\\begin{array}{cccc}\nd_1 &amp; 0 &amp; \\cdots &amp; 0\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span></p>\n\n<p>The <span class=\"math-container\">$(n-1)\\times (n-1)$</span> matrix <span class=\"math-container\">$g_1'$</span> that is obtained by deleting the first row and the first column of <span class=\"math-container\">$g_1$</span> is in <span class=\"math-container\">$N(B_{n-1})$</span>; thus by induction hypothesis, <span class=\"math-container\">$g_1'\\in B_{n-1}$</span>. It follows that <span class=\"math-container\">$g_1\\in B_{n}$</span>. Hence <span class=\"math-container\">$N(B_{n})\\subset B_{n}$</span>.The converse inclusion holds trivially.</p>\n" }, { "answer_id": 2087257, "score": 9, "text": "<p>$\\DeclareMathOperator{\\GL}{GL}$$\\newcommand{\\Span}[1]{\\left\\langle #1 \\right\\rangle}$$\\newcommand{\\Set}[1]{\\left\\{ #1 \\right\\}}$Let $e_0, e_1, \\dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write\n$$\nV_i = \\Span{ e_j : j \\ge i}.\n$$\nAllow me to use row vectors, so that the group $G = \\GL(n, F)$ acts on the right.</p>\n\n<p>Then \n$$\nB = \\Set{b \\in G : V_i b \\subseteq V_i \\text{ for each $i$}}.\n$$\nMoreover, </p>\n\n<blockquote>\n <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \\subseteq W$. </p>\n</blockquote>\n\n<p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \\times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p>\n\n<p>Let $g \\in N_{G}(B)$. Then for each $b \\in B$ we have $g b g^{-1} \\in B$, that is for all $i$\n$$\nV_{i} g b g^{-1} \\subseteq V_i\n$$\nor\n$$\n(V_{i} g) b \\subseteq V_i g.\n$$\nIt follows from the above that $V_{i} g = V_{i}$, so that $g \\in B$.</p>\n" }, { "answer_id": 3058282, "score": 8, "text": "<p>It can be shown by direct computation. First, we need the following lemma.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Lemma.</strong> Let <span class=\"math-container\">$g\\in GL(n,F)$</span>. Let <span class=\"math-container\">$h:=g^{-1}$</span>. 
If there are entries of <span class=\"math-container\">$g$</span> and <span class=\"math-container\">$h$</span> such that \n <span class=\"math-container\">$$g_{ij}\\neq 0,h_{kj}\\neq 0,i&gt;j,k\\geq j,$$</span>\n then <span class=\"math-container\">$g$</span> is not in <span class=\"math-container\">$N(B)$</span>, the normalizer of <span class=\"math-container\">$B$</span>.</p>\n</blockquote>\n\n<p>(Proof)Let <span class=\"math-container\">$b=I+E_{jk}$</span>, where <span class=\"math-container\">$E_{jk}$</span> is the matrix whose entries are all zero except for the <span class=\"math-container\">$(j,k)$</span>-entry. Since <span class=\"math-container\">$k\\geq j$</span>, <span class=\"math-container\">$b$</span> is an element of <span class=\"math-container\">$B$</span>. Notice that <span class=\"math-container\">$(gbg^{-1})_{ij}=(g)_{ij}(h)_{kj}\\neq 0$</span>. So <span class=\"math-container\">$gbg^{-1}$</span> is not upper triangular(, for <span class=\"math-container\">$i&gt;j$</span>). Hence <span class=\"math-container\">$gbg^{-1}\\not \\in B$</span>. Therefore <span class=\"math-container\">$g\\not \\in N(B)$</span>. <span class=\"math-container\">$\\square$</span></p>\n\n<hr>\n\n<p>Now we prove that <span class=\"math-container\">$N(B)=B$</span> by induction on <span class=\"math-container\">$n$</span>. </p>\n\n<p>Let us denote by <span class=\"math-container\">$B_n$</span> the set of invertible upper triangular matrices of size n and by <span class=\"math-container\">$N(B_n)$</span> its normalizer in <span class=\"math-container\">$GL(n,F)$</span>. Assuming that <span class=\"math-container\">$N(B_{n-1})=B_{n-1}$</span>, we prove <span class=\"math-container\">$N(B_{n})=B_{n}$</span>.</p>\n\n<p>Suppose that <span class=\"math-container\">$g_0\\in N(B)$</span>. Let <span class=\"math-container\">$h_0:=g_0^{-1}$</span>. Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. If we have <span class=\"math-container\">$(g_0)_{i,1}\\neq 0, (h_0)_{k,1}\\neq 0$</span> for some <span class=\"math-container\">$i,k$</span>, then the above lemma implies that <span class=\"math-container\">$i=1$</span>. On the other hand, Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. 
Hence <span class=\"math-container\">$g_0$</span> has the form<span class=\"math-container\">$$g_0=\\left[\\begin{array}{cccc}\nd_1 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span>\nWe then multiply a product <span class=\"math-container\">$c_1$</span> of elementary matrices that corresponds to adding multiples of the first column to other columns, which is in <span class=\"math-container\">$B$</span>, to obtain an element <span class=\"math-container\">$g_1:=g_0c_1$</span> of the form\n <span class=\"math-container\">$$g_{1}=\\left[\\begin{array}{cccc}\nd_1 &amp; 0 &amp; \\cdots &amp; 0\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span></p>\n\n<p>The <span class=\"math-container\">$(n-1)\\times (n-1)$</span> matrix <span class=\"math-container\">$g_1'$</span> that is obtained by deleting the first row and the first column of <span class=\"math-container\">$g_1$</span> is in <span class=\"math-container\">$N(B_{n-1})$</span>; thus by induction hypothesis, <span class=\"math-container\">$g_1'\\in B_{n-1}$</span>. It follows that <span class=\"math-container\">$g_1\\in B_{n}$</span>. Hence <span class=\"math-container\">$N(B_{n})\\subset B_{n}$</span>.The converse inclusion holds trivially.</p>\n" }, { "answer_id": 2087257, "score": 9, "text": "<p>$\\DeclareMathOperator{\\GL}{GL}$$\\newcommand{\\Span}[1]{\\left\\langle #1 \\right\\rangle}$$\\newcommand{\\Set}[1]{\\left\\{ #1 \\right\\}}$Let $e_0, e_1, \\dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write\n$$\nV_i = \\Span{ e_j : j \\ge i}.\n$$\nAllow me to use row vectors, so that the group $G = \\GL(n, F)$ acts on the right.</p>\n\n<p>Then \n$$\nB = \\Set{b \\in G : V_i b \\subseteq V_i \\text{ for each $i$}}.\n$$\nMoreover, </p>\n\n<blockquote>\n <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \\subseteq W$. </p>\n</blockquote>\n\n<p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \\times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p>\n\n<p>Let $g \\in N_{G}(B)$. Then for each $b \\in B$ we have $g b g^{-1} \\in B$, that is for all $i$\n$$\nV_{i} g b g^{-1} \\subseteq V_i\n$$\nor\n$$\n(V_{i} g) b \\subseteq V_i g.\n$$\nIt follows from the above that $V_{i} g = V_{i}$, so that $g \\in B$.</p>\n" }, { "answer_id": 3058282, "score": 8, "text": "<p>It can be shown by direct computation. First, we need the following lemma.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Lemma.</strong> Let <span class=\"math-container\">$g\\in GL(n,F)$</span>. Let <span class=\"math-container\">$h:=g^{-1}$</span>. 
If there are entries of <span class=\"math-container\">$g$</span> and <span class=\"math-container\">$h$</span> such that \n <span class=\"math-container\">$$g_{ij}\\neq 0,h_{kj}\\neq 0,i&gt;j,k\\geq j,$$</span>\n then <span class=\"math-container\">$g$</span> is not in <span class=\"math-container\">$N(B)$</span>, the normalizer of <span class=\"math-container\">$B$</span>.</p>\n</blockquote>\n\n<p>(Proof)Let <span class=\"math-container\">$b=I+E_{jk}$</span>, where <span class=\"math-container\">$E_{jk}$</span> is the matrix whose entries are all zero except for the <span class=\"math-container\">$(j,k)$</span>-entry. Since <span class=\"math-container\">$k\\geq j$</span>, <span class=\"math-container\">$b$</span> is an element of <span class=\"math-container\">$B$</span>. Notice that <span class=\"math-container\">$(gbg^{-1})_{ij}=(g)_{ij}(h)_{kj}\\neq 0$</span>. So <span class=\"math-container\">$gbg^{-1}$</span> is not upper triangular(, for <span class=\"math-container\">$i&gt;j$</span>). Hence <span class=\"math-container\">$gbg^{-1}\\not \\in B$</span>. Therefore <span class=\"math-container\">$g\\not \\in N(B)$</span>. <span class=\"math-container\">$\\square$</span></p>\n\n<hr>\n\n<p>Now we prove that <span class=\"math-container\">$N(B)=B$</span> by induction on <span class=\"math-container\">$n$</span>. </p>\n\n<p>Let us denote by <span class=\"math-container\">$B_n$</span> the set of invertible upper triangular matrices of size n and by <span class=\"math-container\">$N(B_n)$</span> its normalizer in <span class=\"math-container\">$GL(n,F)$</span>. Assuming that <span class=\"math-container\">$N(B_{n-1})=B_{n-1}$</span>, we prove <span class=\"math-container\">$N(B_{n})=B_{n}$</span>.</p>\n\n<p>Suppose that <span class=\"math-container\">$g_0\\in N(B)$</span>. Let <span class=\"math-container\">$h_0:=g_0^{-1}$</span>. Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. If we have <span class=\"math-container\">$(g_0)_{i,1}\\neq 0, (h_0)_{k,1}\\neq 0$</span> for some <span class=\"math-container\">$i,k$</span>, then the above lemma implies that <span class=\"math-container\">$i=1$</span>. On the other hand, Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. 
Hence <span class=\"math-container\">$g_0$</span> has the form<span class=\"math-container\">$$g_0=\\left[\\begin{array}{cccc}\nd_1 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span>\nWe then multiply a product <span class=\"math-container\">$c_1$</span> of elementary matrices that corresponds to adding multiples of the first column to other columns, which is in <span class=\"math-container\">$B$</span>, to obtain an element <span class=\"math-container\">$g_1:=g_0c_1$</span> of the form\n <span class=\"math-container\">$$g_{1}=\\left[\\begin{array}{cccc}\nd_1 &amp; 0 &amp; \\cdots &amp; 0\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span></p>\n\n<p>The <span class=\"math-container\">$(n-1)\\times (n-1)$</span> matrix <span class=\"math-container\">$g_1'$</span> that is obtained by deleting the first row and the first column of <span class=\"math-container\">$g_1$</span> is in <span class=\"math-container\">$N(B_{n-1})$</span>; thus by induction hypothesis, <span class=\"math-container\">$g_1'\\in B_{n-1}$</span>. It follows that <span class=\"math-container\">$g_1\\in B_{n}$</span>. Hence <span class=\"math-container\">$N(B_{n})\\subset B_{n}$</span>.The converse inclusion holds trivially.</p>\n" }, { "answer_id": 2087257, "score": 9, "text": "<p>$\\DeclareMathOperator{\\GL}{GL}$$\\newcommand{\\Span}[1]{\\left\\langle #1 \\right\\rangle}$$\\newcommand{\\Set}[1]{\\left\\{ #1 \\right\\}}$Let $e_0, e_1, \\dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write\n$$\nV_i = \\Span{ e_j : j \\ge i}.\n$$\nAllow me to use row vectors, so that the group $G = \\GL(n, F)$ acts on the right.</p>\n\n<p>Then \n$$\nB = \\Set{b \\in G : V_i b \\subseteq V_i \\text{ for each $i$}}.\n$$\nMoreover, </p>\n\n<blockquote>\n <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \\subseteq W$. </p>\n</blockquote>\n\n<p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \\times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p>\n\n<p>Let $g \\in N_{G}(B)$. Then for each $b \\in B$ we have $g b g^{-1} \\in B$, that is for all $i$\n$$\nV_{i} g b g^{-1} \\subseteq V_i\n$$\nor\n$$\n(V_{i} g) b \\subseteq V_i g.\n$$\nIt follows from the above that $V_{i} g = V_{i}$, so that $g \\in B$.</p>\n" }, { "answer_id": 3058282, "score": 8, "text": "<p>It can be shown by direct computation. First, we need the following lemma.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Lemma.</strong> Let <span class=\"math-container\">$g\\in GL(n,F)$</span>. Let <span class=\"math-container\">$h:=g^{-1}$</span>. 
If there are entries of <span class=\"math-container\">$g$</span> and <span class=\"math-container\">$h$</span> such that \n <span class=\"math-container\">$$g_{ij}\\neq 0,h_{kj}\\neq 0,i&gt;j,k\\geq j,$$</span>\n then <span class=\"math-container\">$g$</span> is not in <span class=\"math-container\">$N(B)$</span>, the normalizer of <span class=\"math-container\">$B$</span>.</p>\n</blockquote>\n\n<p>(Proof)Let <span class=\"math-container\">$b=I+E_{jk}$</span>, where <span class=\"math-container\">$E_{jk}$</span> is the matrix whose entries are all zero except for the <span class=\"math-container\">$(j,k)$</span>-entry. Since <span class=\"math-container\">$k\\geq j$</span>, <span class=\"math-container\">$b$</span> is an element of <span class=\"math-container\">$B$</span>. Notice that <span class=\"math-container\">$(gbg^{-1})_{ij}=(g)_{ij}(h)_{kj}\\neq 0$</span>. So <span class=\"math-container\">$gbg^{-1}$</span> is not upper triangular(, for <span class=\"math-container\">$i&gt;j$</span>). Hence <span class=\"math-container\">$gbg^{-1}\\not \\in B$</span>. Therefore <span class=\"math-container\">$g\\not \\in N(B)$</span>. <span class=\"math-container\">$\\square$</span></p>\n\n<hr>\n\n<p>Now we prove that <span class=\"math-container\">$N(B)=B$</span> by induction on <span class=\"math-container\">$n$</span>. </p>\n\n<p>Let us denote by <span class=\"math-container\">$B_n$</span> the set of invertible upper triangular matrices of size n and by <span class=\"math-container\">$N(B_n)$</span> its normalizer in <span class=\"math-container\">$GL(n,F)$</span>. Assuming that <span class=\"math-container\">$N(B_{n-1})=B_{n-1}$</span>, we prove <span class=\"math-container\">$N(B_{n})=B_{n}$</span>.</p>\n\n<p>Suppose that <span class=\"math-container\">$g_0\\in N(B)$</span>. Let <span class=\"math-container\">$h_0:=g_0^{-1}$</span>. Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. If we have <span class=\"math-container\">$(g_0)_{i,1}\\neq 0, (h_0)_{k,1}\\neq 0$</span> for some <span class=\"math-container\">$i,k$</span>, then the above lemma implies that <span class=\"math-container\">$i=1$</span>. On the other hand, Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. 
Hence <span class=\"math-container\">$g_0$</span> has the form<span class=\"math-container\">$$g_0=\\left[\\begin{array}{cccc}\nd_1 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span>\nWe then multiply a product <span class=\"math-container\">$c_1$</span> of elementary matrices that corresponds to adding multiples of the first column to other columns, which is in <span class=\"math-container\">$B$</span>, to obtain an element <span class=\"math-container\">$g_1:=g_0c_1$</span> of the form\n <span class=\"math-container\">$$g_{1}=\\left[\\begin{array}{cccc}\nd_1 &amp; 0 &amp; \\cdots &amp; 0\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span></p>\n\n<p>The <span class=\"math-container\">$(n-1)\\times (n-1)$</span> matrix <span class=\"math-container\">$g_1'$</span> that is obtained by deleting the first row and the first column of <span class=\"math-container\">$g_1$</span> is in <span class=\"math-container\">$N(B_{n-1})$</span>; thus by induction hypothesis, <span class=\"math-container\">$g_1'\\in B_{n-1}$</span>. It follows that <span class=\"math-container\">$g_1\\in B_{n}$</span>. Hence <span class=\"math-container\">$N(B_{n})\\subset B_{n}$</span>.The converse inclusion holds trivially.</p>\n" }, { "answer_id": 2087257, "score": 9, "text": "<p>$\\DeclareMathOperator{\\GL}{GL}$$\\newcommand{\\Span}[1]{\\left\\langle #1 \\right\\rangle}$$\\newcommand{\\Set}[1]{\\left\\{ #1 \\right\\}}$Let $e_0, e_1, \\dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write\n$$\nV_i = \\Span{ e_j : j \\ge i}.\n$$\nAllow me to use row vectors, so that the group $G = \\GL(n, F)$ acts on the right.</p>\n\n<p>Then \n$$\nB = \\Set{b \\in G : V_i b \\subseteq V_i \\text{ for each $i$}}.\n$$\nMoreover, </p>\n\n<blockquote>\n <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \\subseteq W$. </p>\n</blockquote>\n\n<p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \\times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p>\n\n<p>Let $g \\in N_{G}(B)$. Then for each $b \\in B$ we have $g b g^{-1} \\in B$, that is for all $i$\n$$\nV_{i} g b g^{-1} \\subseteq V_i\n$$\nor\n$$\n(V_{i} g) b \\subseteq V_i g.\n$$\nIt follows from the above that $V_{i} g = V_{i}$, so that $g \\in B$.</p>\n" }, { "answer_id": 3058282, "score": 8, "text": "<p>It can be shown by direct computation. First, we need the following lemma.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Lemma.</strong> Let <span class=\"math-container\">$g\\in GL(n,F)$</span>. Let <span class=\"math-container\">$h:=g^{-1}$</span>. 
If there are entries of <span class=\"math-container\">$g$</span> and <span class=\"math-container\">$h$</span> such that \n <span class=\"math-container\">$$g_{ij}\\neq 0,h_{kj}\\neq 0,i&gt;j,k\\geq j,$$</span>\n then <span class=\"math-container\">$g$</span> is not in <span class=\"math-container\">$N(B)$</span>, the normalizer of <span class=\"math-container\">$B$</span>.</p>\n</blockquote>\n\n<p>(Proof)Let <span class=\"math-container\">$b=I+E_{jk}$</span>, where <span class=\"math-container\">$E_{jk}$</span> is the matrix whose entries are all zero except for the <span class=\"math-container\">$(j,k)$</span>-entry. Since <span class=\"math-container\">$k\\geq j$</span>, <span class=\"math-container\">$b$</span> is an element of <span class=\"math-container\">$B$</span>. Notice that <span class=\"math-container\">$(gbg^{-1})_{ij}=(g)_{ij}(h)_{kj}\\neq 0$</span>. So <span class=\"math-container\">$gbg^{-1}$</span> is not upper triangular(, for <span class=\"math-container\">$i&gt;j$</span>). Hence <span class=\"math-container\">$gbg^{-1}\\not \\in B$</span>. Therefore <span class=\"math-container\">$g\\not \\in N(B)$</span>. <span class=\"math-container\">$\\square$</span></p>\n\n<hr>\n\n<p>Now we prove that <span class=\"math-container\">$N(B)=B$</span> by induction on <span class=\"math-container\">$n$</span>. </p>\n\n<p>Let us denote by <span class=\"math-container\">$B_n$</span> the set of invertible upper triangular matrices of size n and by <span class=\"math-container\">$N(B_n)$</span> its normalizer in <span class=\"math-container\">$GL(n,F)$</span>. Assuming that <span class=\"math-container\">$N(B_{n-1})=B_{n-1}$</span>, we prove <span class=\"math-container\">$N(B_{n})=B_{n}$</span>.</p>\n\n<p>Suppose that <span class=\"math-container\">$g_0\\in N(B)$</span>. Let <span class=\"math-container\">$h_0:=g_0^{-1}$</span>. Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. If we have <span class=\"math-container\">$(g_0)_{i,1}\\neq 0, (h_0)_{k,1}\\neq 0$</span> for some <span class=\"math-container\">$i,k$</span>, then the above lemma implies that <span class=\"math-container\">$i=1$</span>. On the other hand, Since <span class=\"math-container\">$g_0$</span> and <span class=\"math-container\">$h_0$</span> are invertible, neither of them has a column of zero. 
Hence <span class=\"math-container\">$g_0$</span> has the form<span class=\"math-container\">$$g_0=\\left[\\begin{array}{cccc}\nd_1 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span>\nWe then multiply a product <span class=\"math-container\">$c_1$</span> of elementary matrices that corresponds to adding multiples of the first column to other columns, which is in <span class=\"math-container\">$B$</span>, to obtain an element <span class=\"math-container\">$g_1:=g_0c_1$</span> of the form\n <span class=\"math-container\">$$g_{1}=\\left[\\begin{array}{cccc}\nd_1 &amp; 0 &amp; \\cdots &amp; 0\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\\\\\n\\vdots &amp; &amp; \\vdots\\\\\n0 &amp; \\ast &amp; \\cdots &amp; \\ast\n\\end{array}\\right].$$</span></p>\n\n<p>The <span class=\"math-container\">$(n-1)\\times (n-1)$</span> matrix <span class=\"math-container\">$g_1'$</span> that is obtained by deleting the first row and the first column of <span class=\"math-container\">$g_1$</span> is in <span class=\"math-container\">$N(B_{n-1})$</span>; thus by induction hypothesis, <span class=\"math-container\">$g_1'\\in B_{n-1}$</span>. It follows that <span class=\"math-container\">$g_1\\in B_{n}$</span>. Hence <span class=\"math-container\">$N(B_{n})\\subset B_{n}$</span>.The converse inclusion holds trivially.</p>\n" }, { "answer_id": 2087257, "score": 9, "text": "<p>$\\DeclareMathOperator{\\GL}{GL}$$\\newcommand{\\Span}[1]{\\left\\langle #1 \\right\\rangle}$$\\newcommand{\\Set}[1]{\\left\\{ #1 \\right\\}}$Let $e_0, e_1, \\dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write\n$$\nV_i = \\Span{ e_j : j \\ge i}.\n$$\nAllow me to use row vectors, so that the group $G = \\GL(n, F)$ acts on the right.</p>\n\n<p>Then \n$$\nB = \\Set{b \\in G : V_i b \\subseteq V_i \\text{ for each $i$}}.\n$$\nMoreover, </p>\n\n<blockquote>\n <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \\subseteq W$. </p>\n</blockquote>\n\n<p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \\times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p>\n\n<p>Let $g \\in N_{G}(B)$. Then for each $b \\in B$ we have $g b g^{-1} \\in B$, that is for all $i$\n$$\nV_{i} g b g^{-1} \\subseteq V_i\n$$\nor\n$$\n(V_{i} g) b \\subseteq V_i g.\n$$\nIt follows from the above that $V_{i} g = V_{i}$, so that $g \\in B$.</p>\n" }, { "answer_id": 3058282, "score": 8, "text": "<p>It can be shown by direct computation. First, we need the following lemma.</p>\n\n<hr>\n\n<blockquote>\n <p><strong>Lemma.</strong> Let <span class=\"math-container\">$g\\in GL(n,F)$</span>. Let <span class=\"math-container\">$h:=g^{-1}$</span>. 
Question: Normalizer of upper triangular group in ${\rm GL}(n,F)$

Answer 1: $\DeclareMathOperator{\GL}{GL}$$\newcommand{\Span}[1]{\left\langle #1 \right\rangle}$$\newcommand{\Set}[1]{\left\{ #1 \right\}}$Let $e_0, e_1, \dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write
$$
V_i = \Span{ e_j : j \ge i}.
$$
Allow me to use row vectors, so that the group $G = \GL(n, F)$ acts on the right. Then
$$
B = \Set{b \in G : V_i b \subseteq V_i \text{ for each } i}.
$$
Moreover, $V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \subseteq W$.

This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \times n$ and eigenvalue $1$, say); then pass to $V / V_{n-1}$ and use induction.

Let $g \in N_{G}(B)$. Then for each $b \in B$ we have $g b g^{-1} \in B$, that is, for all $i$,
$$
V_{i} g b g^{-1} \subseteq V_i,
$$
or
$$
(V_{i} g) b \subseteq V_i g.
$$
Since $V_i g$ is thus a $B$-invariant subspace of dimension $n-i$, the uniqueness statement above gives $V_{i} g = V_{i}$ for every $i$, so that $g \in B$.
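Answer 1's characterization of $B$ as the stabilizer of the flag is easy to sanity-check in a small case. The following is a minimal numerical sketch, not part of the original answer: the helper name `preserves_flag` and the test matrices are my own illustrative choices, and SymPy is assumed for exact arithmetic. With the row-vector convention, $V_i b \subseteq V_i$ for every $i$ says exactly that each row $e_j b$ with $j \ge i$ has zero entries in columns $< i$, which for all $i$ together is the upper-triangularity of $b$.

```python
# Sanity check of the flag characterization (illustrative sketch, not
# from the original answer; matrices and helper names are my own).
import sympy as sp

def preserves_flag(b: sp.Matrix) -> bool:
    """Check V_i b <= V_i for every i, with V_i = span(e_j : j >= i).

    The row vector e_j * b is the j-th row of b, so the condition says:
    for every i, rows j >= i of b vanish in columns c < i.
    """
    n = b.rows
    return all(b[j, c] == 0
               for i in range(n) for j in range(i, n) for c in range(i))

b_upper = sp.Matrix([[1, 2, 3], [0, 4, 5], [0, 0, 6]])   # an element of B
g_perm  = sp.Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]])   # not in B

assert b_upper.is_upper and preserves_flag(b_upper)
assert not preserves_flag(g_perm)

# A matrix outside B fails to normalize B: here g b g^{-1} acquires a
# nonzero entry below the diagonal.
conj = g_perm * b_upper * g_perm.inv()
assert not conj.is_upper   # conj == Matrix([[4, 0, 5], [2, 1, 3], [0, 0, 6]])
```

The last assertion also previews the lemma in Answer 2: conjugating a suitable $b \in B$ by a matrix outside $B$ produces a nonzero entry below the diagonal.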
Answer 2: It can be shown by direct computation. First, we need the following lemma.

Lemma. Let $g\in GL(n,F)$ and let $h:=g^{-1}$. If there are entries of $g$ and $h$ such that
$$g_{ij}\neq 0,\quad h_{kj}\neq 0,\quad i>j,\quad k\geq j,$$
then $g$ is not in $N(B)$, the normalizer of $B$.

Proof. Let $b=I+E_{jk}$, where $E_{jk}$ is the matrix whose entries are all zero except for the $(j,k)$-entry. Since $k\geq j$, $b$ is an element of $B$. Since $gbg^{-1}=I+gE_{jk}h$ and $i>j$, we have $(gbg^{-1})_{ij}=g_{ij}h_{kj}\neq 0$, an entry below the diagonal. So $gbg^{-1}$ is not upper triangular. Hence $gbg^{-1}\not\in B$, and therefore $g\not\in N(B)$. $\square$
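As a concrete instance of the lemma (again my own sketch, not part of the answer; the matrix $g$ below is an arbitrary invertible choice with a nonzero entry below the diagonal, and indices are 0-based in the code, which changes nothing in the argument):

```python
# Concrete instance of the lemma (illustrative sketch).
import sympy as sp

g = sp.Matrix([[1, 0, 0], [1, 1, 0], [0, 1, 1]])
h = g.inv()

# Indices as in the lemma: g[i, j] != 0, h[k, j] != 0, i > j, k >= j.
n = g.rows
i, j, k = next((i, j, k)
               for j in range(n) for i in range(j + 1, n)
               for k in range(j, n)
               if g[i, j] != 0 and h[k, j] != 0)

E = sp.zeros(n, n)
E[j, k] = 1
b = sp.eye(n) + E            # b = I + E_{jk}, upper triangular since k >= j
conj = g * b * h             # g b g^{-1}

assert b.is_upper
assert conj[i, j] == g[i, j] * h[k, j] != 0   # the entry from the proof
assert not conj.is_upper                      # so g does not normalize B
```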
Now we prove that $N(B)=B$ by induction on $n$.

Let us denote by $B_n$ the group of invertible upper triangular matrices of size $n$ and by $N(B_n)$ its normalizer in $GL(n,F)$. The case $n=1$ is trivial, since $B_1=GL(1,F)$. Assuming that $N(B_{n-1})=B_{n-1}$, we prove $N(B_{n})=B_{n}$.

Suppose that $g_0\in N(B_n)$ and let $h_0:=g_0^{-1}$. Since $g_0$ and $h_0$ are invertible, neither of them has a zero column; in particular $(h_0)_{k,1}\neq 0$ for some $k$. The lemma, applied with $j=1$, then forces $(g_0)_{i,1}=0$ for every $i>1$, and hence $(g_0)_{1,1}\neq 0$. So $g_0$ has the form
$$g_0=\left[\begin{array}{cccc}
d_1 & \ast & \cdots & \ast\\
0 & \ast & \cdots & \ast\\
\vdots & & & \vdots\\
0 & \ast & \cdots & \ast
\end{array}\right].$$
We then multiply on the right by a product $c_1$ of elementary matrices, each adding a multiple of the first column to a later column; such a product lies in $B_n$. This yields an element $g_1:=g_0c_1$ of the form
$$g_{1}=\left[\begin{array}{cccc}
d_1 & 0 & \cdots & 0\\
0 & \ast & \cdots & \ast\\
\vdots & & & \vdots\\
0 & \ast & \cdots & \ast
\end{array}\right].$$

Since $c_1\in B_n$, we still have $g_1\in N(B_n)$. The $(n-1)\times(n-1)$ matrix $g_1'$ obtained by deleting the first row and the first column of $g_1$ is then in $N(B_{n-1})$: conjugation by the block-diagonal matrix $g_1$ acts on the lower-right block as conjugation by $g_1'$. By the induction hypothesis, $g_1'\in B_{n-1}$, so $g_1\in B_{n}$, and therefore $g_0=g_1c_1^{-1}\in B_n$. Hence $N(B_{n})\subset B_{n}$; the converse inclusion holds trivially.
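The column-clearing step admits an equally concrete illustration. In the answer's notation $c_1=I+\sum_{m>1} t_m E_{1m}$ with $t_m=-(g_0)_{1m}/d_1$; these elementary factors commute, so their product is this single upper triangular matrix. The sketch below is my own, with an arbitrary $g_0$ of the shape just derived; the code uses 0-based indices.

```python
# The column-clearing step in a small case (illustrative sketch;
# g0 is an arbitrary matrix of the shape derived in the answer).
import sympy as sp

g0 = sp.Matrix([[2, 3, 5], [0, 1, 4], [0, 6, 7]])   # first column (d_1, 0, 0)^T
d1 = g0[0, 0]
n = g0.rows

# c1 = product of the commuting elementary matrices I + t_m * E_{0m},
# with t_m = -g0[0, m] / d1: add t_m times column 0 to column m.
c1 = sp.eye(n)
for m in range(1, n):
    c1[0, m] = -g0[0, m] / d1

assert c1.is_upper           # so c1 lies in B
g1 = g0 * c1
assert all(g1[0, m] == 0 for m in range(1, n))   # first row cleared
# g1 == Matrix([[2, 0, 0], [0, 1, 4], [0, 6, 7]])
```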
It follows that <span class="math-container">$g_1\in B_{n}$</span>. Hence <span class="math-container">$N(B_{n})\subset B_{n}$</span>.The converse inclusion holds trivially.</p> Answer 5: <p>$\DeclareMathOperator{\GL}{GL}$$\newcommand{\Span}[1]{\left\langle #1 \right\rangle}$$\newcommand{\Set}[1]{\left\{ #1 \right\}}$Let $e_0, e_1, \dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write $$ V_i = \Span{ e_j : j \ge i}. $$ Allow me to use row vectors, so that the group $G = \GL(n, F)$ acts on the right.</p> <p>Then $$ B = \Set{b \in G : V_i b \subseteq V_i \text{ for each $i$}}. $$ Moreover, </p> <blockquote> <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \subseteq W$. </p> </blockquote> <p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p> <p>Let $g \in N_{G}(B)$. Then for each $b \in B$ we have $g b g^{-1} \in B$, that is for all $i$ $$ V_{i} g b g^{-1} \subseteq V_i $$ or $$ (V_{i} g) b \subseteq V_i g. $$ It follows from the above that $V_{i} g = V_{i}$, so that $g \in B$.</p> Answer 6: <p>It can be shown by direct computation. First, we need the following lemma.</p> <hr> <blockquote> <p><strong>Lemma.</strong> Let <span class="math-container">$g\in GL(n,F)$</span>. Let <span class="math-container">$h:=g^{-1}$</span>. If there are entries of <span class="math-container">$g$</span> and <span class="math-container">$h$</span> such that <span class="math-container">$$g_{ij}\neq 0,h_{kj}\neq 0,i&gt;j,k\geq j,$$</span> then <span class="math-container">$g$</span> is not in <span class="math-container">$N(B)$</span>, the normalizer of <span class="math-container">$B$</span>.</p> </blockquote> <p>(Proof)Let <span class="math-container">$b=I+E_{jk}$</span>, where <span class="math-container">$E_{jk}$</span> is the matrix whose entries are all zero except for the <span class="math-container">$(j,k)$</span>-entry. Since <span class="math-container">$k\geq j$</span>, <span class="math-container">$b$</span> is an element of <span class="math-container">$B$</span>. Notice that <span class="math-container">$(gbg^{-1})_{ij}=(g)_{ij}(h)_{kj}\neq 0$</span>. So <span class="math-container">$gbg^{-1}$</span> is not upper triangular(, for <span class="math-container">$i&gt;j$</span>). Hence <span class="math-container">$gbg^{-1}\not \in B$</span>. Therefore <span class="math-container">$g\not \in N(B)$</span>. <span class="math-container">$\square$</span></p> <hr> <p>Now we prove that <span class="math-container">$N(B)=B$</span> by induction on <span class="math-container">$n$</span>. </p> <p>Let us denote by <span class="math-container">$B_n$</span> the set of invertible upper triangular matrices of size n and by <span class="math-container">$N(B_n)$</span> its normalizer in <span class="math-container">$GL(n,F)$</span>. Assuming that <span class="math-container">$N(B_{n-1})=B_{n-1}$</span>, we prove <span class="math-container">$N(B_{n})=B_{n}$</span>.</p> <p>Suppose that <span class="math-container">$g_0\in N(B)$</span>. Let <span class="math-container">$h_0:=g_0^{-1}$</span>. Since <span class="math-container">$g_0$</span> and <span class="math-container">$h_0$</span> are invertible, neither of them has a column of zero. 
If we have <span class="math-container">$(g_0)_{i,1}\neq 0, (h_0)_{k,1}\neq 0$</span> for some <span class="math-container">$i,k$</span>, then the above lemma implies that <span class="math-container">$i=1$</span>. On the other hand, Since <span class="math-container">$g_0$</span> and <span class="math-container">$h_0$</span> are invertible, neither of them has a column of zero. Hence <span class="math-container">$g_0$</span> has the form<span class="math-container">$$g_0=\left[\begin{array}{cccc} d_1 &amp; \ast &amp; \cdots &amp; \ast\\ 0 &amp; \ast &amp; \cdots &amp; \ast\\ \vdots &amp; &amp; \vdots\\ 0 &amp; \ast &amp; \cdots &amp; \ast \end{array}\right].$$</span> We then multiply a product <span class="math-container">$c_1$</span> of elementary matrices that corresponds to adding multiples of the first column to other columns, which is in <span class="math-container">$B$</span>, to obtain an element <span class="math-container">$g_1:=g_0c_1$</span> of the form <span class="math-container">$$g_{1}=\left[\begin{array}{cccc} d_1 &amp; 0 &amp; \cdots &amp; 0\\ 0 &amp; \ast &amp; \cdots &amp; \ast\\ \vdots &amp; &amp; \vdots\\ 0 &amp; \ast &amp; \cdots &amp; \ast \end{array}\right].$$</span></p> <p>The <span class="math-container">$(n-1)\times (n-1)$</span> matrix <span class="math-container">$g_1'$</span> that is obtained by deleting the first row and the first column of <span class="math-container">$g_1$</span> is in <span class="math-container">$N(B_{n-1})$</span>; thus by induction hypothesis, <span class="math-container">$g_1'\in B_{n-1}$</span>. It follows that <span class="math-container">$g_1\in B_{n}$</span>. Hence <span class="math-container">$N(B_{n})\subset B_{n}$</span>.The converse inclusion holds trivially.</p> Answer 7: <p>$\DeclareMathOperator{\GL}{GL}$$\newcommand{\Span}[1]{\left\langle #1 \right\rangle}$$\newcommand{\Set}[1]{\left\{ #1 \right\}}$Let $e_0, e_1, \dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write $$ V_i = \Span{ e_j : j \ge i}. $$ Allow me to use row vectors, so that the group $G = \GL(n, F)$ acts on the right.</p> <p>Then $$ B = \Set{b \in G : V_i b \subseteq V_i \text{ for each $i$}}. $$ Moreover, </p> <blockquote> <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \subseteq W$. </p> </blockquote> <p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p> <p>Let $g \in N_{G}(B)$. Then for each $b \in B$ we have $g b g^{-1} \in B$, that is for all $i$ $$ V_{i} g b g^{-1} \subseteq V_i $$ or $$ (V_{i} g) b \subseteq V_i g. $$ It follows from the above that $V_{i} g = V_{i}$, so that $g \in B$.</p> Answer 8: <p>It can be shown by direct computation. First, we need the following lemma.</p> <hr> <blockquote> <p><strong>Lemma.</strong> Let <span class="math-container">$g\in GL(n,F)$</span>. Let <span class="math-container">$h:=g^{-1}$</span>. 
If there are entries of <span class="math-container">$g$</span> and <span class="math-container">$h$</span> such that <span class="math-container">$$g_{ij}\neq 0,h_{kj}\neq 0,i&gt;j,k\geq j,$$</span> then <span class="math-container">$g$</span> is not in <span class="math-container">$N(B)$</span>, the normalizer of <span class="math-container">$B$</span>.</p> </blockquote> <p>(Proof)Let <span class="math-container">$b=I+E_{jk}$</span>, where <span class="math-container">$E_{jk}$</span> is the matrix whose entries are all zero except for the <span class="math-container">$(j,k)$</span>-entry. Since <span class="math-container">$k\geq j$</span>, <span class="math-container">$b$</span> is an element of <span class="math-container">$B$</span>. Notice that <span class="math-container">$(gbg^{-1})_{ij}=(g)_{ij}(h)_{kj}\neq 0$</span>. So <span class="math-container">$gbg^{-1}$</span> is not upper triangular(, for <span class="math-container">$i&gt;j$</span>). Hence <span class="math-container">$gbg^{-1}\not \in B$</span>. Therefore <span class="math-container">$g\not \in N(B)$</span>. <span class="math-container">$\square$</span></p> <hr> <p>Now we prove that <span class="math-container">$N(B)=B$</span> by induction on <span class="math-container">$n$</span>. </p> <p>Let us denote by <span class="math-container">$B_n$</span> the set of invertible upper triangular matrices of size n and by <span class="math-container">$N(B_n)$</span> its normalizer in <span class="math-container">$GL(n,F)$</span>. Assuming that <span class="math-container">$N(B_{n-1})=B_{n-1}$</span>, we prove <span class="math-container">$N(B_{n})=B_{n}$</span>.</p> <p>Suppose that <span class="math-container">$g_0\in N(B)$</span>. Let <span class="math-container">$h_0:=g_0^{-1}$</span>. Since <span class="math-container">$g_0$</span> and <span class="math-container">$h_0$</span> are invertible, neither of them has a column of zero. If we have <span class="math-container">$(g_0)_{i,1}\neq 0, (h_0)_{k,1}\neq 0$</span> for some <span class="math-container">$i,k$</span>, then the above lemma implies that <span class="math-container">$i=1$</span>. On the other hand, Since <span class="math-container">$g_0$</span> and <span class="math-container">$h_0$</span> are invertible, neither of them has a column of zero. Hence <span class="math-container">$g_0$</span> has the form<span class="math-container">$$g_0=\left[\begin{array}{cccc} d_1 &amp; \ast &amp; \cdots &amp; \ast\\ 0 &amp; \ast &amp; \cdots &amp; \ast\\ \vdots &amp; &amp; \vdots\\ 0 &amp; \ast &amp; \cdots &amp; \ast \end{array}\right].$$</span> We then multiply a product <span class="math-container">$c_1$</span> of elementary matrices that corresponds to adding multiples of the first column to other columns, which is in <span class="math-container">$B$</span>, to obtain an element <span class="math-container">$g_1:=g_0c_1$</span> of the form <span class="math-container">$$g_{1}=\left[\begin{array}{cccc} d_1 &amp; 0 &amp; \cdots &amp; 0\\ 0 &amp; \ast &amp; \cdots &amp; \ast\\ \vdots &amp; &amp; \vdots\\ 0 &amp; \ast &amp; \cdots &amp; \ast \end{array}\right].$$</span></p> <p>The <span class="math-container">$(n-1)\times (n-1)$</span> matrix <span class="math-container">$g_1'$</span> that is obtained by deleting the first row and the first column of <span class="math-container">$g_1$</span> is in <span class="math-container">$N(B_{n-1})$</span>; thus by induction hypothesis, <span class="math-container">$g_1'\in B_{n-1}$</span>. 
It follows that <span class="math-container">$g_1\in B_{n}$</span>. Hence <span class="math-container">$N(B_{n})\subset B_{n}$</span>.The converse inclusion holds trivially.</p> Answer 9: <p>$\DeclareMathOperator{\GL}{GL}$$\newcommand{\Span}[1]{\left\langle #1 \right\rangle}$$\newcommand{\Set}[1]{\left\{ #1 \right\}}$Let $e_0, e_1, \dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write $$ V_i = \Span{ e_j : j \ge i}. $$ Allow me to use row vectors, so that the group $G = \GL(n, F)$ acts on the right.</p> <p>Then $$ B = \Set{b \in G : V_i b \subseteq V_i \text{ for each $i$}}. $$ Moreover, </p> <blockquote> <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \subseteq W$. </p> </blockquote> <p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p> <p>Let $g \in N_{G}(B)$. Then for each $b \in B$ we have $g b g^{-1} \in B$, that is for all $i$ $$ V_{i} g b g^{-1} \subseteq V_i $$ or $$ (V_{i} g) b \subseteq V_i g. $$ It follows from the above that $V_{i} g = V_{i}$, so that $g \in B$.</p> Answer 10: <p>It can be shown by direct computation. First, we need the following lemma.</p> <hr> <blockquote> <p><strong>Lemma.</strong> Let <span class="math-container">$g\in GL(n,F)$</span>. Let <span class="math-container">$h:=g^{-1}$</span>. If there are entries of <span class="math-container">$g$</span> and <span class="math-container">$h$</span> such that <span class="math-container">$$g_{ij}\neq 0,h_{kj}\neq 0,i&gt;j,k\geq j,$$</span> then <span class="math-container">$g$</span> is not in <span class="math-container">$N(B)$</span>, the normalizer of <span class="math-container">$B$</span>.</p> </blockquote> <p>(Proof)Let <span class="math-container">$b=I+E_{jk}$</span>, where <span class="math-container">$E_{jk}$</span> is the matrix whose entries are all zero except for the <span class="math-container">$(j,k)$</span>-entry. Since <span class="math-container">$k\geq j$</span>, <span class="math-container">$b$</span> is an element of <span class="math-container">$B$</span>. Notice that <span class="math-container">$(gbg^{-1})_{ij}=(g)_{ij}(h)_{kj}\neq 0$</span>. So <span class="math-container">$gbg^{-1}$</span> is not upper triangular(, for <span class="math-container">$i&gt;j$</span>). Hence <span class="math-container">$gbg^{-1}\not \in B$</span>. Therefore <span class="math-container">$g\not \in N(B)$</span>. <span class="math-container">$\square$</span></p> <hr> <p>Now we prove that <span class="math-container">$N(B)=B$</span> by induction on <span class="math-container">$n$</span>. </p> <p>Let us denote by <span class="math-container">$B_n$</span> the set of invertible upper triangular matrices of size n and by <span class="math-container">$N(B_n)$</span> its normalizer in <span class="math-container">$GL(n,F)$</span>. Assuming that <span class="math-container">$N(B_{n-1})=B_{n-1}$</span>, we prove <span class="math-container">$N(B_{n})=B_{n}$</span>.</p> <p>Suppose that <span class="math-container">$g_0\in N(B)$</span>. Let <span class="math-container">$h_0:=g_0^{-1}$</span>. Since <span class="math-container">$g_0$</span> and <span class="math-container">$h_0$</span> are invertible, neither of them has a column of zero. 
If we have <span class="math-container">$(g_0)_{i,1}\neq 0, (h_0)_{k,1}\neq 0$</span> for some <span class="math-container">$i,k$</span>, then the above lemma implies that <span class="math-container">$i=1$</span>. On the other hand, Since <span class="math-container">$g_0$</span> and <span class="math-container">$h_0$</span> are invertible, neither of them has a column of zero. Hence <span class="math-container">$g_0$</span> has the form<span class="math-container">$$g_0=\left[\begin{array}{cccc} d_1 &amp; \ast &amp; \cdots &amp; \ast\\ 0 &amp; \ast &amp; \cdots &amp; \ast\\ \vdots &amp; &amp; \vdots\\ 0 &amp; \ast &amp; \cdots &amp; \ast \end{array}\right].$$</span> We then multiply a product <span class="math-container">$c_1$</span> of elementary matrices that corresponds to adding multiples of the first column to other columns, which is in <span class="math-container">$B$</span>, to obtain an element <span class="math-container">$g_1:=g_0c_1$</span> of the form <span class="math-container">$$g_{1}=\left[\begin{array}{cccc} d_1 &amp; 0 &amp; \cdots &amp; 0\\ 0 &amp; \ast &amp; \cdots &amp; \ast\\ \vdots &amp; &amp; \vdots\\ 0 &amp; \ast &amp; \cdots &amp; \ast \end{array}\right].$$</span></p> <p>The <span class="math-container">$(n-1)\times (n-1)$</span> matrix <span class="math-container">$g_1'$</span> that is obtained by deleting the first row and the first column of <span class="math-container">$g_1$</span> is in <span class="math-container">$N(B_{n-1})$</span>; thus by induction hypothesis, <span class="math-container">$g_1'\in B_{n-1}$</span>. It follows that <span class="math-container">$g_1\in B_{n}$</span>. Hence <span class="math-container">$N(B_{n})\subset B_{n}$</span>.The converse inclusion holds trivially.</p> Answer 11: <p>$\DeclareMathOperator{\GL}{GL}$$\newcommand{\Span}[1]{\left\langle #1 \right\rangle}$$\newcommand{\Set}[1]{\left\{ #1 \right\}}$Let $e_0, e_1, \dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write $$ V_i = \Span{ e_j : j \ge i}. $$ Allow me to use row vectors, so that the group $G = \GL(n, F)$ acts on the right.</p> <p>Then $$ B = \Set{b \in G : V_i b \subseteq V_i \text{ for each $i$}}. $$ Moreover, </p> <blockquote> <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \subseteq W$. </p> </blockquote> <p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p> <p>Let $g \in N_{G}(B)$. Then for each $b \in B$ we have $g b g^{-1} \in B$, that is for all $i$ $$ V_{i} g b g^{-1} \subseteq V_i $$ or $$ (V_{i} g) b \subseteq V_i g. $$ It follows from the above that $V_{i} g = V_{i}$, so that $g \in B$.</p> Answer 12: <p>It can be shown by direct computation. First, we need the following lemma.</p> <hr> <blockquote> <p><strong>Lemma.</strong> Let <span class="math-container">$g\in GL(n,F)$</span>. Let <span class="math-container">$h:=g^{-1}$</span>. 
If there are entries of <span class="math-container">$g$</span> and <span class="math-container">$h$</span> such that <span class="math-container">$$g_{ij}\neq 0,h_{kj}\neq 0,i&gt;j,k\geq j,$$</span> then <span class="math-container">$g$</span> is not in <span class="math-container">$N(B)$</span>, the normalizer of <span class="math-container">$B$</span>.</p> </blockquote> <p>(Proof)Let <span class="math-container">$b=I+E_{jk}$</span>, where <span class="math-container">$E_{jk}$</span> is the matrix whose entries are all zero except for the <span class="math-container">$(j,k)$</span>-entry. Since <span class="math-container">$k\geq j$</span>, <span class="math-container">$b$</span> is an element of <span class="math-container">$B$</span>. Notice that <span class="math-container">$(gbg^{-1})_{ij}=(g)_{ij}(h)_{kj}\neq 0$</span>. So <span class="math-container">$gbg^{-1}$</span> is not upper triangular(, for <span class="math-container">$i&gt;j$</span>). Hence <span class="math-container">$gbg^{-1}\not \in B$</span>. Therefore <span class="math-container">$g\not \in N(B)$</span>. <span class="math-container">$\square$</span></p> <hr> <p>Now we prove that <span class="math-container">$N(B)=B$</span> by induction on <span class="math-container">$n$</span>. </p> <p>Let us denote by <span class="math-container">$B_n$</span> the set of invertible upper triangular matrices of size n and by <span class="math-container">$N(B_n)$</span> its normalizer in <span class="math-container">$GL(n,F)$</span>. Assuming that <span class="math-container">$N(B_{n-1})=B_{n-1}$</span>, we prove <span class="math-container">$N(B_{n})=B_{n}$</span>.</p> <p>Suppose that <span class="math-container">$g_0\in N(B)$</span>. Let <span class="math-container">$h_0:=g_0^{-1}$</span>. Since <span class="math-container">$g_0$</span> and <span class="math-container">$h_0$</span> are invertible, neither of them has a column of zero. If we have <span class="math-container">$(g_0)_{i,1}\neq 0, (h_0)_{k,1}\neq 0$</span> for some <span class="math-container">$i,k$</span>, then the above lemma implies that <span class="math-container">$i=1$</span>. On the other hand, Since <span class="math-container">$g_0$</span> and <span class="math-container">$h_0$</span> are invertible, neither of them has a column of zero. Hence <span class="math-container">$g_0$</span> has the form<span class="math-container">$$g_0=\left[\begin{array}{cccc} d_1 &amp; \ast &amp; \cdots &amp; \ast\\ 0 &amp; \ast &amp; \cdots &amp; \ast\\ \vdots &amp; &amp; \vdots\\ 0 &amp; \ast &amp; \cdots &amp; \ast \end{array}\right].$$</span> We then multiply a product <span class="math-container">$c_1$</span> of elementary matrices that corresponds to adding multiples of the first column to other columns, which is in <span class="math-container">$B$</span>, to obtain an element <span class="math-container">$g_1:=g_0c_1$</span> of the form <span class="math-container">$$g_{1}=\left[\begin{array}{cccc} d_1 &amp; 0 &amp; \cdots &amp; 0\\ 0 &amp; \ast &amp; \cdots &amp; \ast\\ \vdots &amp; &amp; \vdots\\ 0 &amp; \ast &amp; \cdots &amp; \ast \end{array}\right].$$</span></p> <p>The <span class="math-container">$(n-1)\times (n-1)$</span> matrix <span class="math-container">$g_1'$</span> that is obtained by deleting the first row and the first column of <span class="math-container">$g_1$</span> is in <span class="math-container">$N(B_{n-1})$</span>; thus by induction hypothesis, <span class="math-container">$g_1'\in B_{n-1}$</span>. 
It follows that <span class="math-container">$g_1\in B_{n}$</span>. Hence <span class="math-container">$N(B_{n})\subset B_{n}$</span>.The converse inclusion holds trivially.</p> Answer 13: <p>$\DeclareMathOperator{\GL}{GL}$$\newcommand{\Span}[1]{\left\langle #1 \right\rangle}$$\newcommand{\Set}[1]{\left\{ #1 \right\}}$Let $e_0, e_1, \dots, e_{n-1}$ be a basis with respect to which $B$ is upper-triangular, and write $$ V_i = \Span{ e_j : j \ge i}. $$ Allow me to use row vectors, so that the group $G = \GL(n, F)$ acts on the right.</p> <p>Then $$ B = \Set{b \in G : V_i b \subseteq V_i \text{ for each $i$}}. $$ Moreover, </p> <blockquote> <p>$V_{n-i}$ is the unique subspace $W$ of dimension $i$ such that $W B \subseteq W$. </p> </blockquote> <p>This is proved by induction on $i$. For $i = 1$ we have that $V_{n-1}$ is the unique common eigenspace for the elements of $B$ (just consider the element of $B$ which is a single Jordan block of size $n \times n$ and eigenvalue $1$, say), then pass to $V / V_{n-1}$ and use induction.</p> <p>Let $g \in N_{G}(B)$. Then for each $b \in B$ we have $g b g^{-1} \in B$, that is for all $i$ $$ V_{i} g b g^{-1} \subseteq V_i $$ or $$ (V_{i} g) b \subseteq V_i g. $$ It follows from the above that $V_{i} g = V_{i}$, so that $g \in B$.</p> Answer 14: <p>It can be shown by direct computation. First, we need the following lemma.</p> <hr> <blockquote> <p><strong>Lemma.</strong> Let <span class="math-container">$g\in GL(n,F)$</span>. Let <span class="math-container">$h:=g^{-1}$</span>. If there are entries of <span class="math-container">$g$</span> and <span class="math-container">$h$</span> such that <span class="math-container">$$g_{ij}\neq 0,h_{kj}\neq 0,i&gt;j,k\geq j,$$</span> then <span class="math-container">$g$</span> is not in <span class="math-container">$N(B)$</span>, the normalizer of <span class="math-container">$B$</span>.</p> </blockquote> <p>(Proof)Let <span class="math-container">$b=I+E_{jk}$</span>, where <span class="math-container">$E_{jk}$</span> is the matrix whose entries are all zero except for the <span class="math-container">$(j,k)$</span>-entry. Since <span class="math-container">$k\geq j$</span>, <span class="math-container">$b$</span> is an element of <span class="math-container">$B$</span>. Notice that <span class="math-container">$(gbg^{-1})_{ij}=(g)_{ij}(h)_{kj}\neq 0$</span>. So <span class="math-container">$gbg^{-1}$</span> is not upper triangular(, for <span class="math-container">$i&gt;j$</span>). Hence <span class="math-container">$gbg^{-1}\not \in B$</span>. Therefore <span class="math-container">$g\not \in N(B)$</span>. <span class="math-container">$\square$</span></p> <hr> <p>Now we prove that <span class="math-container">$N(B)=B$</span> by induction on <span class="math-container">$n$</span>. </p> <p>Let us denote by <span class="math-container">$B_n$</span> the set of invertible upper triangular matrices of size n and by <span class="math-container">$N(B_n)$</span> its normalizer in <span class="math-container">$GL(n,F)$</span>. Assuming that <span class="math-container">$N(B_{n-1})=B_{n-1}$</span>, we prove <span class="math-container">$N(B_{n})=B_{n}$</span>.</p> <p>Suppose that <span class="math-container">$g_0\in N(B)$</span>. Let <span class="math-container">$h_0:=g_0^{-1}$</span>. Since <span class="math-container">$g_0$</span> and <span class="math-container">$h_0$</span> are invertible, neither of them has a column of zero. 
Prove $\int_{B(x,r)}|\nabla u|^2\leq \frac{C}{r^2}\int_{B(x,2r)}|u|^2$ if $-\Delta u(x)+f(x)u(x)=0.$
2,087,724
6
https://math.stackexchange.com/questions/2087724/prove-int-bx-r-nabla-u2-leq-fraccr2-int-bx-2ru2-if-del
<p>Let $\Omega $ be a smooth domain of $\mathbb R^d$ ($d\geq 2$) and $f\in \mathcal C(\overline{\Omega })$. Let $u\in \mathcal C^2(\overline{\Omega })$ be a solution of $$-\Delta u(x)+f(x)u(x)=0\quad \text{in } \Omega .$$ Assume that $f(x)\geq 0$ for $x\in \Omega $. Prove that $$\int_{B(x,r)}|\nabla u|^2\leq \frac{C}{r^2}\int_{B(x,2r)}|u|^2$$ for all $x\in \Omega $ and $r&gt;0$ such that $B(x,2r)\subset \subset \Omega $, for some constant $C\geq 0$ independent of $u,f,x$ and $r$.</p> <p><strong>My attempts</strong></p> <p>Using the divergence theorem and the fact that $\Delta u=fu$ in $\Omega $, I have that $$\int_{B(x,r)}|\nabla u|^2=\int_{B(x,r)}\text{div}(u\nabla u)-\int_{B(x,r)}u\Delta u=\int_{\partial B(x,r)}u\nabla u\cdot \nu-\int_{B(x,r)}fu^2.$$</p> <p>But I can't get any further. Any help would be welcome.</p>
[ { "answer_id": 2087817, "score": 5, "text": "<p>Suppose that $\\varphi : \\bar{B}(x,2r) \\to [0,\\infty)$ is Lipschitz and $\\varphi =0$ on $\\partial B(x,2r)$. Note that since $\\varphi$ is Lipschitz, it is differentiable almost everywhere by Rademacher's theorem. Then \n$$\n0 \\le \\int_{B(x,2r)} \\varphi^2 f u^2 = \\int_{B(x,2r)} \\varphi^2 u \\Delta u = \\int_{B(x,2r)} - \\nabla (\\varphi^2 u) \\cdot \\nabla u\n$$<br>\nsince $\\varphi$ vanishes on $\\partial B(x,2r)$. Since \n$$\n\\nabla(\\varphi^2 u) = 2 \\varphi \\nabla \\varphi u + \\varphi^2 \\nabla u\n$$\nwe find, upon plugging in above, that \n$$\n0 \\le -\\int_{B(x,2r)}2 \\varphi u \\nabla \\varphi \\cdot \\nabla u + \\varphi^2 |\\nabla u|^2,\n$$\nand so \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le \\int_{B(x,2r)} -2 \\varphi u \\nabla \\varphi \\cdot \\nabla u \\le 2 \\left(\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2\\right)^{1/2} \\left(\\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 \\right)^{1/2}.\n$$\nFrom this we then see that \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2. \n$$</p>\n\n<p>With the last inequality in hand we can prove the result. Set \n$$\n\\varphi(y) =\n\\begin{cases}\n1 &amp; \\text{if } |x-y| \\le r \\\\\n2 - |x-y|/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nIt's easy to see that $\\varphi \\ge 0$, $\\varphi$ vanishes on the boundary, and $\\varphi$ is Lipschitz. Also \n$$\n|\\nabla \\varphi(y)| =\n\\begin{cases}\n0 &amp; \\text{if } |x-y| \\le r \\\\\n1/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nPlugging this in above then gives the inequality\n$$\n\\int_{B(x,r)} |\\nabla u|^2 \\le \\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 = \\frac{4}{r^2} \\int_{B(x,2r) \\backslash B(x,r)} u^2.\n$$\nThis is actually stronger than the desired inequality.</p>\n" }, { "answer_id": 2087820, "score": 4, "text": "<p>Denote $B(x,r)$ by $B_r$ and choose a smooth function $\\xi$ to have the following property\n$$ \\xi=\\left\\{\\begin{array}{lcr}1,&amp;\\text{if }x\\in B_r\\\\\n\\ge 0,&amp;\\text{if }x\\in B_{2r}\\setminus B_r\\\\\n0,&amp;\\text{if }x\\in\\Omega\\setminus B_{2r},\n\\end{array}\n\\right. $$\nsatisfying $|\\nabla \\xi|\\le \\frac{C}{r}$. Using $u\\xi^2$ as a test function in the equation, one has\n$$ \\int_{\\Omega}\\nabla u\\nabla(u\\xi^2)dx=-\\int_{\\Omega}fu^2\\xi^2dx\\le0. $$\nThus\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le-2\\int_{\\Omega}(\\xi\\nabla u)(u\\nabla\\xi) dx. $$\nBy using $2ab\\le \\frac{1}{2}a^2+2b^2$, one has\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le \\frac12\\int_{\\Omega}\\xi^2|\\nabla u|^2dx+2\\int_{\\Omega}|u\\nabla\\xi|^2 dx $$\nand hence\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le 4\\int_{\\Omega}|u\\nabla\\xi|^2 dx. $$\nSo using the property of $\\xi$, one has\n$$ \\int_{B_r}|\\nabla u|^2dx\\le \\frac{C}{r^2}\\int_{B_{2r}}|u|^2 dx. $$</p>\n" }, { "answer_id": 2087817, "score": 5, "text": "<p>Suppose that $\\varphi : \\bar{B}(x,2r) \\to [0,\\infty)$ is Lipschitz and $\\varphi =0$ on $\\partial B(x,2r)$. Note that since $\\varphi$ is Lipschitz, it is differentiable almost everywhere by Rademacher's theorem. Then \n$$\n0 \\le \\int_{B(x,2r)} \\varphi^2 f u^2 = \\int_{B(x,2r)} \\varphi^2 u \\Delta u = \\int_{B(x,2r)} - \\nabla (\\varphi^2 u) \\cdot \\nabla u\n$$<br>\nsince $\\varphi$ vanishes on $\\partial B(x,2r)$. 
Since \n$$\n\\nabla(\\varphi^2 u) = 2 \\varphi \\nabla \\varphi u + \\varphi^2 \\nabla u\n$$\nwe find, upon plugging in above, that \n$$\n0 \\le -\\int_{B(x,2r)}2 \\varphi u \\nabla \\varphi \\cdot \\nabla u + \\varphi^2 |\\nabla u|^2,\n$$\nand so \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le \\int_{B(x,2r)} -2 \\varphi u \\nabla \\varphi \\cdot \\nabla u \\le 2 \\left(\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2\\right)^{1/2} \\left(\\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 \\right)^{1/2}.\n$$\nFrom this we then see that \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2. \n$$</p>\n\n<p>With the last inequality in hand we can prove the result. Set \n$$\n\\varphi(y) =\n\\begin{cases}\n1 &amp; \\text{if } |x-y| \\le r \\\\\n2 - |x-y|/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nIt's easy to see that $\\varphi \\ge 0$, $\\varphi$ vanishes on the boundary, and $\\varphi$ is Lipschitz. Also \n$$\n|\\nabla \\varphi(y)| =\n\\begin{cases}\n0 &amp; \\text{if } |x-y| \\le r \\\\\n1/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nPlugging this in above then gives the inequality\n$$\n\\int_{B(x,r)} |\\nabla u|^2 \\le \\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 = \\frac{4}{r^2} \\int_{B(x,2r) \\backslash B(x,r)} u^2.\n$$\nThis is actually stronger than the desired inequality.</p>\n" }, { "answer_id": 2087820, "score": 4, "text": "<p>Denote $B(x,r)$ by $B_r$ and choose a smooth function $\\xi$ to have the following property\n$$ \\xi=\\left\\{\\begin{array}{lcr}1,&amp;\\text{if }x\\in B_r\\\\\n\\ge 0,&amp;\\text{if }x\\in B_{2r}\\setminus B_r\\\\\n0,&amp;\\text{if }x\\in\\Omega\\setminus B_{2r},\n\\end{array}\n\\right. $$\nsatisfying $|\\nabla \\xi|\\le \\frac{C}{r}$. Using $u\\xi^2$ as a test function in the equation, one has\n$$ \\int_{\\Omega}\\nabla u\\nabla(u\\xi^2)dx=-\\int_{\\Omega}fu^2\\xi^2dx\\le0. $$\nThus\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le-2\\int_{\\Omega}(\\xi\\nabla u)(u\\nabla\\xi) dx. $$\nBy using $2ab\\le \\frac{1}{2}a^2+2b^2$, one has\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le \\frac12\\int_{\\Omega}\\xi^2|\\nabla u|^2dx+2\\int_{\\Omega}|u\\nabla\\xi|^2 dx $$\nand hence\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le 4\\int_{\\Omega}|u\\nabla\\xi|^2 dx. $$\nSo using the property of $\\xi$, one has\n$$ \\int_{B_r}|\\nabla u|^2dx\\le \\frac{C}{r^2}\\int_{B_{2r}}|u|^2 dx. $$</p>\n" }, { "answer_id": 2087817, "score": 5, "text": "<p>Suppose that $\\varphi : \\bar{B}(x,2r) \\to [0,\\infty)$ is Lipschitz and $\\varphi =0$ on $\\partial B(x,2r)$. Note that since $\\varphi$ is Lipschitz, it is differentiable almost everywhere by Rademacher's theorem. Then \n$$\n0 \\le \\int_{B(x,2r)} \\varphi^2 f u^2 = \\int_{B(x,2r)} \\varphi^2 u \\Delta u = \\int_{B(x,2r)} - \\nabla (\\varphi^2 u) \\cdot \\nabla u\n$$<br>\nsince $\\varphi$ vanishes on $\\partial B(x,2r)$. 
Since \n$$\n\\nabla(\\varphi^2 u) = 2 \\varphi \\nabla \\varphi u + \\varphi^2 \\nabla u\n$$\nwe find, upon plugging in above, that \n$$\n0 \\le -\\int_{B(x,2r)}2 \\varphi u \\nabla \\varphi \\cdot \\nabla u + \\varphi^2 |\\nabla u|^2,\n$$\nand so \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le \\int_{B(x,2r)} -2 \\varphi u \\nabla \\varphi \\cdot \\nabla u \\le 2 \\left(\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2\\right)^{1/2} \\left(\\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 \\right)^{1/2}.\n$$\nFrom this we then see that \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2. \n$$</p>\n\n<p>With the last inequality in hand we can prove the result. Set \n$$\n\\varphi(y) =\n\\begin{cases}\n1 &amp; \\text{if } |x-y| \\le r \\\\\n2 - |x-y|/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nIt's easy to see that $\\varphi \\ge 0$, $\\varphi$ vanishes on the boundary, and $\\varphi$ is Lipschitz. Also \n$$\n|\\nabla \\varphi(y)| =\n\\begin{cases}\n0 &amp; \\text{if } |x-y| \\le r \\\\\n1/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nPlugging this in above then gives the inequality\n$$\n\\int_{B(x,r)} |\\nabla u|^2 \\le \\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 = \\frac{4}{r^2} \\int_{B(x,2r) \\backslash B(x,r)} u^2.\n$$\nThis is actually stronger than the desired inequality.</p>\n" }, { "answer_id": 2087820, "score": 4, "text": "<p>Denote $B(x,r)$ by $B_r$ and choose a smooth function $\\xi$ to have the following property\n$$ \\xi=\\left\\{\\begin{array}{lcr}1,&amp;\\text{if }x\\in B_r\\\\\n\\ge 0,&amp;\\text{if }x\\in B_{2r}\\setminus B_r\\\\\n0,&amp;\\text{if }x\\in\\Omega\\setminus B_{2r},\n\\end{array}\n\\right. $$\nsatisfying $|\\nabla \\xi|\\le \\frac{C}{r}$. Using $u\\xi^2$ as a test function in the equation, one has\n$$ \\int_{\\Omega}\\nabla u\\nabla(u\\xi^2)dx=-\\int_{\\Omega}fu^2\\xi^2dx\\le0. $$\nThus\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le-2\\int_{\\Omega}(\\xi\\nabla u)(u\\nabla\\xi) dx. $$\nBy using $2ab\\le \\frac{1}{2}a^2+2b^2$, one has\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le \\frac12\\int_{\\Omega}\\xi^2|\\nabla u|^2dx+2\\int_{\\Omega}|u\\nabla\\xi|^2 dx $$\nand hence\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le 4\\int_{\\Omega}|u\\nabla\\xi|^2 dx. $$\nSo using the property of $\\xi$, one has\n$$ \\int_{B_r}|\\nabla u|^2dx\\le \\frac{C}{r^2}\\int_{B_{2r}}|u|^2 dx. $$</p>\n" }, { "answer_id": 2087817, "score": 5, "text": "<p>Suppose that $\\varphi : \\bar{B}(x,2r) \\to [0,\\infty)$ is Lipschitz and $\\varphi =0$ on $\\partial B(x,2r)$. Note that since $\\varphi$ is Lipschitz, it is differentiable almost everywhere by Rademacher's theorem. Then \n$$\n0 \\le \\int_{B(x,2r)} \\varphi^2 f u^2 = \\int_{B(x,2r)} \\varphi^2 u \\Delta u = \\int_{B(x,2r)} - \\nabla (\\varphi^2 u) \\cdot \\nabla u\n$$<br>\nsince $\\varphi$ vanishes on $\\partial B(x,2r)$. 
Since \n$$\n\\nabla(\\varphi^2 u) = 2 \\varphi \\nabla \\varphi u + \\varphi^2 \\nabla u\n$$\nwe find, upon plugging in above, that \n$$\n0 \\le -\\int_{B(x,2r)}2 \\varphi u \\nabla \\varphi \\cdot \\nabla u + \\varphi^2 |\\nabla u|^2,\n$$\nand so \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le \\int_{B(x,2r)} -2 \\varphi u \\nabla \\varphi \\cdot \\nabla u \\le 2 \\left(\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2\\right)^{1/2} \\left(\\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 \\right)^{1/2}.\n$$\nFrom this we then see that \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2. \n$$</p>\n\n<p>With the last inequality in hand we can prove the result. Set \n$$\n\\varphi(y) =\n\\begin{cases}\n1 &amp; \\text{if } |x-y| \\le r \\\\\n2 - |x-y|/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nIt's easy to see that $\\varphi \\ge 0$, $\\varphi$ vanishes on the boundary, and $\\varphi$ is Lipschitz. Also \n$$\n|\\nabla \\varphi(y)| =\n\\begin{cases}\n0 &amp; \\text{if } |x-y| \\le r \\\\\n1/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nPlugging this in above then gives the inequality\n$$\n\\int_{B(x,r)} |\\nabla u|^2 \\le \\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 = \\frac{4}{r^2} \\int_{B(x,2r) \\backslash B(x,r)} u^2.\n$$\nThis is actually stronger than the desired inequality.</p>\n" }, { "answer_id": 2087820, "score": 4, "text": "<p>Denote $B(x,r)$ by $B_r$ and choose a smooth function $\\xi$ to have the following property\n$$ \\xi=\\left\\{\\begin{array}{lcr}1,&amp;\\text{if }x\\in B_r\\\\\n\\ge 0,&amp;\\text{if }x\\in B_{2r}\\setminus B_r\\\\\n0,&amp;\\text{if }x\\in\\Omega\\setminus B_{2r},\n\\end{array}\n\\right. $$\nsatisfying $|\\nabla \\xi|\\le \\frac{C}{r}$. Using $u\\xi^2$ as a test function in the equation, one has\n$$ \\int_{\\Omega}\\nabla u\\nabla(u\\xi^2)dx=-\\int_{\\Omega}fu^2\\xi^2dx\\le0. $$\nThus\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le-2\\int_{\\Omega}(\\xi\\nabla u)(u\\nabla\\xi) dx. $$\nBy using $2ab\\le \\frac{1}{2}a^2+2b^2$, one has\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le \\frac12\\int_{\\Omega}\\xi^2|\\nabla u|^2dx+2\\int_{\\Omega}|u\\nabla\\xi|^2 dx $$\nand hence\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le 4\\int_{\\Omega}|u\\nabla\\xi|^2 dx. $$\nSo using the property of $\\xi$, one has\n$$ \\int_{B_r}|\\nabla u|^2dx\\le \\frac{C}{r^2}\\int_{B_{2r}}|u|^2 dx. $$</p>\n" }, { "answer_id": 2087817, "score": 5, "text": "<p>Suppose that $\\varphi : \\bar{B}(x,2r) \\to [0,\\infty)$ is Lipschitz and $\\varphi =0$ on $\\partial B(x,2r)$. Note that since $\\varphi$ is Lipschitz, it is differentiable almost everywhere by Rademacher's theorem. Then \n$$\n0 \\le \\int_{B(x,2r)} \\varphi^2 f u^2 = \\int_{B(x,2r)} \\varphi^2 u \\Delta u = \\int_{B(x,2r)} - \\nabla (\\varphi^2 u) \\cdot \\nabla u\n$$<br>\nsince $\\varphi$ vanishes on $\\partial B(x,2r)$. 
Since \n$$\n\\nabla(\\varphi^2 u) = 2 \\varphi \\nabla \\varphi u + \\varphi^2 \\nabla u\n$$\nwe find, upon plugging in above, that \n$$\n0 \\le -\\int_{B(x,2r)}2 \\varphi u \\nabla \\varphi \\cdot \\nabla u + \\varphi^2 |\\nabla u|^2,\n$$\nand so \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le \\int_{B(x,2r)} -2 \\varphi u \\nabla \\varphi \\cdot \\nabla u \\le 2 \\left(\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2\\right)^{1/2} \\left(\\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 \\right)^{1/2}.\n$$\nFrom this we then see that \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2. \n$$</p>\n\n<p>With the last inequality in hand we can prove the result. Set \n$$\n\\varphi(y) =\n\\begin{cases}\n1 &amp; \\text{if } |x-y| \\le r \\\\\n2 - |x-y|/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nIt's easy to see that $\\varphi \\ge 0$, $\\varphi$ vanishes on the boundary, and $\\varphi$ is Lipschitz. Also \n$$\n|\\nabla \\varphi(y)| =\n\\begin{cases}\n0 &amp; \\text{if } |x-y| \\le r \\\\\n1/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nPlugging this in above then gives the inequality\n$$\n\\int_{B(x,r)} |\\nabla u|^2 \\le \\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 = \\frac{4}{r^2} \\int_{B(x,2r) \\backslash B(x,r)} u^2.\n$$\nThis is actually stronger than the desired inequality.</p>\n" }, { "answer_id": 2087820, "score": 4, "text": "<p>Denote $B(x,r)$ by $B_r$ and choose a smooth function $\\xi$ to have the following property\n$$ \\xi=\\left\\{\\begin{array}{lcr}1,&amp;\\text{if }x\\in B_r\\\\\n\\ge 0,&amp;\\text{if }x\\in B_{2r}\\setminus B_r\\\\\n0,&amp;\\text{if }x\\in\\Omega\\setminus B_{2r},\n\\end{array}\n\\right. $$\nsatisfying $|\\nabla \\xi|\\le \\frac{C}{r}$. Using $u\\xi^2$ as a test function in the equation, one has\n$$ \\int_{\\Omega}\\nabla u\\nabla(u\\xi^2)dx=-\\int_{\\Omega}fu^2\\xi^2dx\\le0. $$\nThus\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le-2\\int_{\\Omega}(\\xi\\nabla u)(u\\nabla\\xi) dx. $$\nBy using $2ab\\le \\frac{1}{2}a^2+2b^2$, one has\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le \\frac12\\int_{\\Omega}\\xi^2|\\nabla u|^2dx+2\\int_{\\Omega}|u\\nabla\\xi|^2 dx $$\nand hence\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le 4\\int_{\\Omega}|u\\nabla\\xi|^2 dx. $$\nSo using the property of $\\xi$, one has\n$$ \\int_{B_r}|\\nabla u|^2dx\\le \\frac{C}{r^2}\\int_{B_{2r}}|u|^2 dx. $$</p>\n" }, { "answer_id": 2087817, "score": 5, "text": "<p>Suppose that $\\varphi : \\bar{B}(x,2r) \\to [0,\\infty)$ is Lipschitz and $\\varphi =0$ on $\\partial B(x,2r)$. Note that since $\\varphi$ is Lipschitz, it is differentiable almost everywhere by Rademacher's theorem. Then \n$$\n0 \\le \\int_{B(x,2r)} \\varphi^2 f u^2 = \\int_{B(x,2r)} \\varphi^2 u \\Delta u = \\int_{B(x,2r)} - \\nabla (\\varphi^2 u) \\cdot \\nabla u\n$$<br>\nsince $\\varphi$ vanishes on $\\partial B(x,2r)$. 
Since \n$$\n\\nabla(\\varphi^2 u) = 2 \\varphi \\nabla \\varphi u + \\varphi^2 \\nabla u\n$$\nwe find, upon plugging in above, that \n$$\n0 \\le -\\int_{B(x,2r)}2 \\varphi u \\nabla \\varphi \\cdot \\nabla u + \\varphi^2 |\\nabla u|^2,\n$$\nand so \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le \\int_{B(x,2r)} -2 \\varphi u \\nabla \\varphi \\cdot \\nabla u \\le 2 \\left(\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2\\right)^{1/2} \\left(\\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 \\right)^{1/2}.\n$$\nFrom this we then see that \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2. \n$$</p>\n\n<p>With the last inequality in hand we can prove the result. Set \n$$\n\\varphi(y) =\n\\begin{cases}\n1 &amp; \\text{if } |x-y| \\le r \\\\\n2 - |x-y|/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nIt's easy to see that $\\varphi \\ge 0$, $\\varphi$ vanishes on the boundary, and $\\varphi$ is Lipschitz. Also \n$$\n|\\nabla \\varphi(y)| =\n\\begin{cases}\n0 &amp; \\text{if } |x-y| \\le r \\\\\n1/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nPlugging this in above then gives the inequality\n$$\n\\int_{B(x,r)} |\\nabla u|^2 \\le \\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 = \\frac{4}{r^2} \\int_{B(x,2r) \\backslash B(x,r)} u^2.\n$$\nThis is actually stronger than the desired inequality.</p>\n" }, { "answer_id": 2087820, "score": 4, "text": "<p>Denote $B(x,r)$ by $B_r$ and choose a smooth function $\\xi$ to have the following property\n$$ \\xi=\\left\\{\\begin{array}{lcr}1,&amp;\\text{if }x\\in B_r\\\\\n\\ge 0,&amp;\\text{if }x\\in B_{2r}\\setminus B_r\\\\\n0,&amp;\\text{if }x\\in\\Omega\\setminus B_{2r},\n\\end{array}\n\\right. $$\nsatisfying $|\\nabla \\xi|\\le \\frac{C}{r}$. Using $u\\xi^2$ as a test function in the equation, one has\n$$ \\int_{\\Omega}\\nabla u\\nabla(u\\xi^2)dx=-\\int_{\\Omega}fu^2\\xi^2dx\\le0. $$\nThus\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le-2\\int_{\\Omega}(\\xi\\nabla u)(u\\nabla\\xi) dx. $$\nBy using $2ab\\le \\frac{1}{2}a^2+2b^2$, one has\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le \\frac12\\int_{\\Omega}\\xi^2|\\nabla u|^2dx+2\\int_{\\Omega}|u\\nabla\\xi|^2 dx $$\nand hence\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le 4\\int_{\\Omega}|u\\nabla\\xi|^2 dx. $$\nSo using the property of $\\xi$, one has\n$$ \\int_{B_r}|\\nabla u|^2dx\\le \\frac{C}{r^2}\\int_{B_{2r}}|u|^2 dx. $$</p>\n" }, { "answer_id": 2087817, "score": 5, "text": "<p>Suppose that $\\varphi : \\bar{B}(x,2r) \\to [0,\\infty)$ is Lipschitz and $\\varphi =0$ on $\\partial B(x,2r)$. Note that since $\\varphi$ is Lipschitz, it is differentiable almost everywhere by Rademacher's theorem. Then \n$$\n0 \\le \\int_{B(x,2r)} \\varphi^2 f u^2 = \\int_{B(x,2r)} \\varphi^2 u \\Delta u = \\int_{B(x,2r)} - \\nabla (\\varphi^2 u) \\cdot \\nabla u\n$$<br>\nsince $\\varphi$ vanishes on $\\partial B(x,2r)$. 
Since \n$$\n\\nabla(\\varphi^2 u) = 2 \\varphi \\nabla \\varphi u + \\varphi^2 \\nabla u\n$$\nwe find, upon plugging in above, that \n$$\n0 \\le -\\int_{B(x,2r)}2 \\varphi u \\nabla \\varphi \\cdot \\nabla u + \\varphi^2 |\\nabla u|^2,\n$$\nand so \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le \\int_{B(x,2r)} -2 \\varphi u \\nabla \\varphi \\cdot \\nabla u \\le 2 \\left(\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2\\right)^{1/2} \\left(\\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 \\right)^{1/2}.\n$$\nFrom this we then see that \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2. \n$$</p>\n\n<p>With the last inequality in hand we can prove the result. Set \n$$\n\\varphi(y) =\n\\begin{cases}\n1 &amp; \\text{if } |x-y| \\le r \\\\\n2 - |x-y|/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nIt's easy to see that $\\varphi \\ge 0$, $\\varphi$ vanishes on the boundary, and $\\varphi$ is Lipschitz. Also \n$$\n|\\nabla \\varphi(y)| =\n\\begin{cases}\n0 &amp; \\text{if } |x-y| \\le r \\\\\n1/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nPlugging this in above then gives the inequality\n$$\n\\int_{B(x,r)} |\\nabla u|^2 \\le \\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 = \\frac{4}{r^2} \\int_{B(x,2r) \\backslash B(x,r)} u^2.\n$$\nThis is actually stronger than the desired inequality.</p>\n" }, { "answer_id": 2087820, "score": 4, "text": "<p>Denote $B(x,r)$ by $B_r$ and choose a smooth function $\\xi$ to have the following property\n$$ \\xi=\\left\\{\\begin{array}{lcr}1,&amp;\\text{if }x\\in B_r\\\\\n\\ge 0,&amp;\\text{if }x\\in B_{2r}\\setminus B_r\\\\\n0,&amp;\\text{if }x\\in\\Omega\\setminus B_{2r},\n\\end{array}\n\\right. $$\nsatisfying $|\\nabla \\xi|\\le \\frac{C}{r}$. Using $u\\xi^2$ as a test function in the equation, one has\n$$ \\int_{\\Omega}\\nabla u\\nabla(u\\xi^2)dx=-\\int_{\\Omega}fu^2\\xi^2dx\\le0. $$\nThus\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le-2\\int_{\\Omega}(\\xi\\nabla u)(u\\nabla\\xi) dx. $$\nBy using $2ab\\le \\frac{1}{2}a^2+2b^2$, one has\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le \\frac12\\int_{\\Omega}\\xi^2|\\nabla u|^2dx+2\\int_{\\Omega}|u\\nabla\\xi|^2 dx $$\nand hence\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le 4\\int_{\\Omega}|u\\nabla\\xi|^2 dx. $$\nSo using the property of $\\xi$, one has\n$$ \\int_{B_r}|\\nabla u|^2dx\\le \\frac{C}{r^2}\\int_{B_{2r}}|u|^2 dx. $$</p>\n" }, { "answer_id": 2087817, "score": 5, "text": "<p>Suppose that $\\varphi : \\bar{B}(x,2r) \\to [0,\\infty)$ is Lipschitz and $\\varphi =0$ on $\\partial B(x,2r)$. Note that since $\\varphi$ is Lipschitz, it is differentiable almost everywhere by Rademacher's theorem. Then \n$$\n0 \\le \\int_{B(x,2r)} \\varphi^2 f u^2 = \\int_{B(x,2r)} \\varphi^2 u \\Delta u = \\int_{B(x,2r)} - \\nabla (\\varphi^2 u) \\cdot \\nabla u\n$$<br>\nsince $\\varphi$ vanishes on $\\partial B(x,2r)$. 
Since \n$$\n\\nabla(\\varphi^2 u) = 2 \\varphi \\nabla \\varphi u + \\varphi^2 \\nabla u\n$$\nwe find, upon plugging in above, that \n$$\n0 \\le -\\int_{B(x,2r)}2 \\varphi u \\nabla \\varphi \\cdot \\nabla u + \\varphi^2 |\\nabla u|^2,\n$$\nand so \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le \\int_{B(x,2r)} -2 \\varphi u \\nabla \\varphi \\cdot \\nabla u \\le 2 \\left(\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2\\right)^{1/2} \\left(\\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 \\right)^{1/2}.\n$$\nFrom this we then see that \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2. \n$$</p>\n\n<p>With the last inequality in hand we can prove the result. Set \n$$\n\\varphi(y) =\n\\begin{cases}\n1 &amp; \\text{if } |x-y| \\le r \\\\\n2 - |x-y|/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nIt's easy to see that $\\varphi \\ge 0$, $\\varphi$ vanishes on the boundary, and $\\varphi$ is Lipschitz. Also \n$$\n|\\nabla \\varphi(y)| =\n\\begin{cases}\n0 &amp; \\text{if } |x-y| \\le r \\\\\n1/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nPlugging this in above then gives the inequality\n$$\n\\int_{B(x,r)} |\\nabla u|^2 \\le \\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 = \\frac{4}{r^2} \\int_{B(x,2r) \\backslash B(x,r)} u^2.\n$$\nThis is actually stronger than the desired inequality.</p>\n" }, { "answer_id": 2087820, "score": 4, "text": "<p>Denote $B(x,r)$ by $B_r$ and choose a smooth function $\\xi$ to have the following property\n$$ \\xi=\\left\\{\\begin{array}{lcr}1,&amp;\\text{if }x\\in B_r\\\\\n\\ge 0,&amp;\\text{if }x\\in B_{2r}\\setminus B_r\\\\\n0,&amp;\\text{if }x\\in\\Omega\\setminus B_{2r},\n\\end{array}\n\\right. $$\nsatisfying $|\\nabla \\xi|\\le \\frac{C}{r}$. Using $u\\xi^2$ as a test function in the equation, one has\n$$ \\int_{\\Omega}\\nabla u\\nabla(u\\xi^2)dx=-\\int_{\\Omega}fu^2\\xi^2dx\\le0. $$\nThus\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le-2\\int_{\\Omega}(\\xi\\nabla u)(u\\nabla\\xi) dx. $$\nBy using $2ab\\le \\frac{1}{2}a^2+2b^2$, one has\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le \\frac12\\int_{\\Omega}\\xi^2|\\nabla u|^2dx+2\\int_{\\Omega}|u\\nabla\\xi|^2 dx $$\nand hence\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le 4\\int_{\\Omega}|u\\nabla\\xi|^2 dx. $$\nSo using the property of $\\xi$, one has\n$$ \\int_{B_r}|\\nabla u|^2dx\\le \\frac{C}{r^2}\\int_{B_{2r}}|u|^2 dx. $$</p>\n" }, { "answer_id": 2087817, "score": 5, "text": "<p>Suppose that $\\varphi : \\bar{B}(x,2r) \\to [0,\\infty)$ is Lipschitz and $\\varphi =0$ on $\\partial B(x,2r)$. Note that since $\\varphi$ is Lipschitz, it is differentiable almost everywhere by Rademacher's theorem. Then \n$$\n0 \\le \\int_{B(x,2r)} \\varphi^2 f u^2 = \\int_{B(x,2r)} \\varphi^2 u \\Delta u = \\int_{B(x,2r)} - \\nabla (\\varphi^2 u) \\cdot \\nabla u\n$$<br>\nsince $\\varphi$ vanishes on $\\partial B(x,2r)$. 
Since \n$$\n\\nabla(\\varphi^2 u) = 2 \\varphi \\nabla \\varphi u + \\varphi^2 \\nabla u\n$$\nwe find, upon plugging in above, that \n$$\n0 \\le -\\int_{B(x,2r)}2 \\varphi u \\nabla \\varphi \\cdot \\nabla u + \\varphi^2 |\\nabla u|^2,\n$$\nand so \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le \\int_{B(x,2r)} -2 \\varphi u \\nabla \\varphi \\cdot \\nabla u \\le 2 \\left(\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2\\right)^{1/2} \\left(\\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 \\right)^{1/2}.\n$$\nFrom this we then see that \n$$\n\\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2. \n$$</p>\n\n<p>With the last inequality in hand we can prove the result. Set \n$$\n\\varphi(y) =\n\\begin{cases}\n1 &amp; \\text{if } |x-y| \\le r \\\\\n2 - |x-y|/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nIt's easy to see that $\\varphi \\ge 0$, $\\varphi$ vanishes on the boundary, and $\\varphi$ is Lipschitz. Also \n$$\n|\\nabla \\varphi(y)| =\n\\begin{cases}\n0 &amp; \\text{if } |x-y| \\le r \\\\\n1/r &amp; \\text{if } r &lt; |x-y| \\le 2r. \n\\end{cases}\n$$\nPlugging this in above then gives the inequality\n$$\n\\int_{B(x,r)} |\\nabla u|^2 \\le \\int_{B(x,2r)} \\varphi^2 |\\nabla u|^2 \\le 4 \\int_{B(x,2r)} u^2 |\\nabla \\varphi|^2 = \\frac{4}{r^2} \\int_{B(x,2r) \\backslash B(x,r)} u^2.\n$$\nThis is actually stronger than the desired inequality.</p>\n" }, { "answer_id": 2087820, "score": 4, "text": "<p>Denote $B(x,r)$ by $B_r$ and choose a smooth function $\\xi$ to have the following property\n$$ \\xi=\\left\\{\\begin{array}{lcr}1,&amp;\\text{if }x\\in B_r\\\\\n\\ge 0,&amp;\\text{if }x\\in B_{2r}\\setminus B_r\\\\\n0,&amp;\\text{if }x\\in\\Omega\\setminus B_{2r},\n\\end{array}\n\\right. $$\nsatisfying $|\\nabla \\xi|\\le \\frac{C}{r}$. Using $u\\xi^2$ as a test function in the equation, one has\n$$ \\int_{\\Omega}\\nabla u\\nabla(u\\xi^2)dx=-\\int_{\\Omega}fu^2\\xi^2dx\\le0. $$\nThus\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le-2\\int_{\\Omega}(\\xi\\nabla u)(u\\nabla\\xi) dx. $$\nBy using $2ab\\le \\frac{1}{2}a^2+2b^2$, one has\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le \\frac12\\int_{\\Omega}\\xi^2|\\nabla u|^2dx+2\\int_{\\Omega}|u\\nabla\\xi|^2 dx $$\nand hence\n$$ \\int_{\\Omega}\\xi^2|\\nabla u|^2dx\\le 4\\int_{\\Omega}|u\\nabla\\xi|^2 dx. $$\nSo using the property of $\\xi$, one has\n$$ \\int_{B_r}|\\nabla u|^2dx\\le \\frac{C}{r^2}\\int_{B_{2r}}|u|^2 dx. $$</p>\n" } ]
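Editorial note (not part of either answer above): as a quick sanity check on the inequality just proved, here is a minimal 1-D numerical sketch. It assumes SciPy is available and uses the illustrative choices $f \equiv 1 \ge 0$ and $u = \cosh$, which solves $-u'' + fu = 0$ on the line; the constant $4$ is the one produced by the cutoff-function argument above. These choices are assumptions for illustration, not taken from the answers.

```python
# Hedged 1-D sanity check of the Caccioppoli-type inequality
#   int_{B(0,r)} |u'|^2  <=  (4/r^2) int_{B(0,2r)} u^2
# for u = cosh, which solves -u'' + u = 0 with f = 1 >= 0.
import numpy as np
from scipy.integrate import quad

u, du = np.cosh, np.sinh  # the solution and its derivative

for r in (0.5, 1.0, 2.0):
    lhs, _ = quad(lambda t: du(t) ** 2, -r, r)         # int_{B(0,r)} |u'|^2
    rhs, _ = quad(lambda t: u(t) ** 2, -2 * r, 2 * r)  # int_{B(0,2r)} u^2
    rhs *= 4.0 / r**2                                  # constant from the proof
    print(f"r = {r}: {lhs:.4f} <= {rhs:.4f} -> {lhs <= rhs}")
```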
Question: Prove $\int_{B(x,r)}|\nabla u|^2\leq \frac{C}{r^2}\int_{B(x,2r)}|u|^2$ if $-\Delta u(x)+f(x)u(x)=0.$ Answer 1: <p>Suppose that $\varphi : \bar{B}(x,2r) \to [0,\infty)$ is Lipschitz and $\varphi =0$ on $\partial B(x,2r)$. Note that since $\varphi$ is Lipschitz, it is differentiable almost everywhere by Rademacher's theorem. Then $$ 0 \le \int_{B(x,2r)} \varphi^2 f u^2 = \int_{B(x,2r)} \varphi^2 u \Delta u = \int_{B(x,2r)} - \nabla (\varphi^2 u) \cdot \nabla u $$<br> since $\varphi$ vanishes on $\partial B(x,2r)$. Since $$ \nabla(\varphi^2 u) = 2 \varphi \nabla \varphi u + \varphi^2 \nabla u $$ we find, upon plugging in above, that $$ 0 \le -\int_{B(x,2r)}2 \varphi u \nabla \varphi \cdot \nabla u + \varphi^2 |\nabla u|^2, $$ and so $$ \int_{B(x,2r)} \varphi^2 |\nabla u|^2 \le \int_{B(x,2r)} -2 \varphi u \nabla \varphi \cdot \nabla u \le 2 \left(\int_{B(x,2r)} \varphi^2 |\nabla u|^2\right)^{1/2} \left(\int_{B(x,2r)} u^2 |\nabla \varphi|^2 \right)^{1/2}. $$ From this we then see that $$ \int_{B(x,2r)} \varphi^2 |\nabla u|^2 \le 4 \int_{B(x,2r)} u^2 |\nabla \varphi|^2. $$</p> <p>With the last inequality in hand we can prove the result. Set $$ \varphi(y) = \begin{cases} 1 &amp; \text{if } |x-y| \le r \\ 2 - |x-y|/r &amp; \text{if } r &lt; |x-y| \le 2r. \end{cases} $$ It's easy to see that $\varphi \ge 0$, $\varphi$ vanishes on the boundary, and $\varphi$ is Lipschitz. Also $$ |\nabla \varphi(y)| = \begin{cases} 0 &amp; \text{if } |x-y| \le r \\ 1/r &amp; \text{if } r &lt; |x-y| \le 2r. \end{cases} $$ Plugging this in above then gives the inequality $$ \int_{B(x,r)} |\nabla u|^2 \le \int_{B(x,2r)} \varphi^2 |\nabla u|^2 \le 4 \int_{B(x,2r)} u^2 |\nabla \varphi|^2 = \frac{4}{r^2} \int_{B(x,2r) \backslash B(x,r)} u^2. $$ This is actually stronger than the desired inequality.</p> Answer 2: <p>Denote $B(x,r)$ by $B_r$ and choose a smooth function $\xi$ to have the following property $$ \xi=\left\{\begin{array}{lcr}1,&amp;\text{if }x\in B_r\\ \ge 0,&amp;\text{if }x\in B_{2r}\setminus B_r\\ 0,&amp;\text{if }x\in\Omega\setminus B_{2r}, \end{array} \right. $$ satisfying $|\nabla \xi|\le \frac{C}{r}$. Using $u\xi^2$ as a test function in the equation, one has $$ \int_{\Omega}\nabla u\nabla(u\xi^2)dx=-\int_{\Omega}fu^2\xi^2dx\le0. $$ Thus $$ \int_{\Omega}\xi^2|\nabla u|^2dx\le-2\int_{\Omega}(\xi\nabla u)(u\nabla\xi) dx. $$ By using $2ab\le \frac{1}{2}a^2+2b^2$, one has $$ \int_{\Omega}\xi^2|\nabla u|^2dx\le \frac12\int_{\Omega}\xi^2|\nabla u|^2dx+2\int_{\Omega}|u\nabla\xi|^2 dx $$ and hence $$ \int_{\Omega}\xi^2|\nabla u|^2dx\le 4\int_{\Omega}|u\nabla\xi|^2 dx. $$ So using the property of $\xi$, one has $$ \int_{B_r}|\nabla u|^2dx\le \frac{C}{r^2}\int_{B_{2r}}|u|^2 dx. $$</p>
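Editorial aside before the next entry: the only algebraic ingredient in the second answer above is the weighted Young inequality $2ab \le \frac12 a^2 + 2b^2$. A one-line symbolic check (assuming SymPy is available) confirms that the difference is a perfect square and hence nonnegative:

```python
# Verify 2ab <= a^2/2 + 2b^2 by factoring the difference of the two sides.
from sympy import symbols, factor

a, b = symbols("a b", real=True)
print(factor(a**2 / 2 + 2 * b**2 - 2 * a * b))  # prints (a - 2*b)**2/2, which is >= 0
```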
Hyperbolic substitution for $\int\frac{dx}{x\sqrt{1-x^2}}$
2,088,318
6
https://math.stackexchange.com/questions/2088318/hyperbolic-substitution-for-int-fracdxx-sqrt1-x2
<blockquote> <p>Substitute $x=\sinh\theta$, $\cosh\theta$ or $\tanh\theta$. After integration change back to $x$.$$\int\frac{dx}{x\sqrt{1-x^2}}$$</p> </blockquote> <p>Substituting $x=\tanh\theta$, we have $$\begin{align} \int\frac{dx}{x\sqrt{1-x^2}}&amp;=\int\frac{\text{sech}^2\,\theta\,d\theta}{\tanh\theta\sqrt{1-\tanh^2\theta}}\\ &amp;=\int\text{csch }\theta\,d\theta\\ &amp;=-\ln|\text{csch }\theta+\text{coth }\theta|+C \end{align}$$ Here comes the problem. We can substitute back $\text{coth }\theta=\frac1x$, but what do we have for $\text{csch }\theta$? We have $\text{csch}^2\,\theta=\coth^2\theta-1$, but neither $\sqrt{\coth^2\theta-1}$ nor $-\sqrt{\coth^2\theta-1}$ equals $\text{csch }\theta$ for every $\theta$, since $\text{csch }\theta$ changes sign with $\theta$ while a fixed choice of square root does not. How should we proceed?</p>
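Editorial check (not part of the question or the answers below): the integration step $\int \text{csch}\,\theta\,d\theta = -\ln|\text{csch}\,\theta + \coth\theta| + C$ can be sanity-checked numerically for $\theta > 0$, where $\text{csch}\,\theta + \coth\theta = \frac{1+\cosh\theta}{\sinh\theta} > 0$. NumPy is assumed; this is a sketch, not part of the original post.

```python
# Numerically confirm d/dtheta [ -ln(csch(theta) + coth(theta)) ] = csch(theta)
# for theta > 0, via a central finite difference.
import numpy as np

def F(t):
    # candidate antiderivative of csch from the question, valid for t > 0
    return -np.log((1.0 + np.cosh(t)) / np.sinh(t))

h = 1e-6
for t in (0.3, 1.0, 2.5):
    fd = (F(t + h) - F(t - h)) / (2 * h)  # central difference approximation
    print(f"theta = {t}: F' = {fd:.8f}, csch = {1 / np.sinh(t):.8f}")
```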
[ { "answer_id": 2088333, "score": 5, "text": "<p>$\\DeclareMathOperator{\\csch}{csch}\n\\DeclareMathOperator{\\sech}{sech}$\nTry\n$$\n\\csch(\\theta) = \\frac{1}{\\sinh(\\theta)} = \\frac{1/\\cosh(\\theta)}{\\sinh(\\theta)/\\cosh(\\theta)} = \\frac{\\sech(\\theta)}{\\tanh(\\theta)} = \\frac{\\sqrt{1 - \\tanh^2(\\theta)}}{\\tanh(\\theta)} \\, .\n$$</p>\n\n<p>How did I get this? By cheating, basically. The <a href=\"https://en.wikipedia.org/wiki/Lemniscate_of_Gerono\" rel=\"noreferrer\">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \\cos(\\varphi), y = \\sin(\\varphi) \\cos(\\varphi)$. I used this parametrization to integrate, which gave me an expression involving $\\frac{\\sqrt{1 - x^2}}{x}$, which led me to this answer.</p>\n" }, { "answer_id": 4182985, "score": 0, "text": "<p>For positive <span class=\"math-container\">$x$</span>, another substitution that does not involve integrating <span class=\"math-container\">$\\csc{x}$</span> or <span class=\"math-container\">$\\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class=\"math-container\">$x^2-1$</span>), where <span class=\"math-container\">$x=\\operatorname{sech}{\\theta}$</span>, <span class=\"math-container\">$dx=-\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span>, and <span class=\"math-container\">$$\\sqrt{1-x^2}=\\sqrt{1-\\operatorname{sech}^2{\\theta}}=\\sqrt{\\tanh^2{\\theta}}=\\tanh{\\theta}$$</span> since <span class=\"math-container\">$\\theta\\geq0$</span> and <span class=\"math-container\">$\\tanh{x}$</span> is positive for that interval. <span class=\"math-container\">$$\\int{\\frac{dx}{x\\sqrt{1-x^2}}}=-\\int{\\frac{\\operatorname{sech}{\\theta}\\tanh{\\theta}}{\\operatorname{sech}{\\theta}\\tanh{\\theta}}d\\theta}=-\\int{d\\theta}=-\\theta+C$$</span> Substituting <span class=\"math-container\">$x$</span> back gives <span class=\"math-container\">$$-\\operatorname{arsech}{x}+C\\qquad x&gt;0$$</span>\nFor negative <span class=\"math-container\">$x$</span>, the substitution <span class=\"math-container\">$x=-\\operatorname{sech}{\\theta}$</span> and <span class=\"math-container\">$dx=\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span> give <span class=\"math-container\">$$-\\operatorname{arsech}{\\left(-x\\right)}+C\\qquad x&lt;0$$</span> Combining the two branches gives <span class=\"math-container\">$$-\\operatorname{arsech}{\\left|x\\right|}+C$$</span></p>\n" }, { "answer_id": 2088333, "score": 5, "text": "<p>$\\DeclareMathOperator{\\csch}{csch}\n\\DeclareMathOperator{\\sech}{sech}$\nTry\n$$\n\\csch(\\theta) = \\frac{1}{\\sinh(\\theta)} = \\frac{1/\\cosh(\\theta)}{\\sinh(\\theta)/\\cosh(\\theta)} = \\frac{\\sech(\\theta)}{\\tanh(\\theta)} = \\frac{\\sqrt{1 - \\tanh^2(\\theta)}}{\\tanh(\\theta)} \\, .\n$$</p>\n\n<p>How did I get this? By cheating, basically. The <a href=\"https://en.wikipedia.org/wiki/Lemniscate_of_Gerono\" rel=\"noreferrer\">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \\cos(\\varphi), y = \\sin(\\varphi) \\cos(\\varphi)$. 
I used this parametrization to integrate, which gave me an expression involving $\\frac{\\sqrt{1 - x^2}}{x}$, which led me to this answer.</p>\n" }, { "answer_id": 4182985, "score": 0, "text": "<p>For positive <span class=\"math-container\">$x$</span>, another substitution that does not involve integrating <span class=\"math-container\">$\\csc{x}$</span> or <span class=\"math-container\">$\\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class=\"math-container\">$x^2-1$</span>), where <span class=\"math-container\">$x=\\operatorname{sech}{\\theta}$</span>, <span class=\"math-container\">$dx=-\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span>, and <span class=\"math-container\">$$\\sqrt{1-x^2}=\\sqrt{1-\\operatorname{sech}^2{\\theta}}=\\sqrt{\\tanh^2{\\theta}}=\\tanh{\\theta}$$</span> since <span class=\"math-container\">$\\theta\\geq0$</span> and <span class=\"math-container\">$\\tanh{x}$</span> is positive for that interval. <span class=\"math-container\">$$\\int{\\frac{dx}{x\\sqrt{1-x^2}}}=-\\int{\\frac{\\operatorname{sech}{\\theta}\\tanh{\\theta}}{\\operatorname{sech}{\\theta}\\tanh{\\theta}}d\\theta}=-\\int{d\\theta}=-\\theta+C$$</span> Substituting <span class=\"math-container\">$x$</span> back gives <span class=\"math-container\">$$-\\operatorname{arsech}{x}+C\\qquad x&gt;0$$</span>\nFor negative <span class=\"math-container\">$x$</span>, the substitution <span class=\"math-container\">$x=-\\operatorname{sech}{\\theta}$</span> and <span class=\"math-container\">$dx=\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span> give <span class=\"math-container\">$$-\\operatorname{arsech}{\\left(-x\\right)}+C\\qquad x&lt;0$$</span> Combining the two branches gives <span class=\"math-container\">$$-\\operatorname{arsech}{\\left|x\\right|}+C$$</span></p>\n" }, { "answer_id": 2088333, "score": 5, "text": "<p>$\\DeclareMathOperator{\\csch}{csch}\n\\DeclareMathOperator{\\sech}{sech}$\nTry\n$$\n\\csch(\\theta) = \\frac{1}{\\sinh(\\theta)} = \\frac{1/\\cosh(\\theta)}{\\sinh(\\theta)/\\cosh(\\theta)} = \\frac{\\sech(\\theta)}{\\tanh(\\theta)} = \\frac{\\sqrt{1 - \\tanh^2(\\theta)}}{\\tanh(\\theta)} \\, .\n$$</p>\n\n<p>How did I get this? By cheating, basically. The <a href=\"https://en.wikipedia.org/wiki/Lemniscate_of_Gerono\" rel=\"noreferrer\">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \\cos(\\varphi), y = \\sin(\\varphi) \\cos(\\varphi)$. 
I used this parametrization to integrate, which gave me an expression involving $\\frac{\\sqrt{1 - x^2}}{x}$, which led me to this answer.</p>\n" }, { "answer_id": 4182985, "score": 0, "text": "<p>For positive <span class=\"math-container\">$x$</span>, another substitution that does not involve integrating <span class=\"math-container\">$\\csc{x}$</span> or <span class=\"math-container\">$\\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class=\"math-container\">$x^2-1$</span>), where <span class=\"math-container\">$x=\\operatorname{sech}{\\theta}$</span>, <span class=\"math-container\">$dx=-\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span>, and <span class=\"math-container\">$$\\sqrt{1-x^2}=\\sqrt{1-\\operatorname{sech}^2{\\theta}}=\\sqrt{\\tanh^2{\\theta}}=\\tanh{\\theta}$$</span> since <span class=\"math-container\">$\\theta\\geq0$</span> and <span class=\"math-container\">$\\tanh{x}$</span> is positive for that interval. <span class=\"math-container\">$$\\int{\\frac{dx}{x\\sqrt{1-x^2}}}=-\\int{\\frac{\\operatorname{sech}{\\theta}\\tanh{\\theta}}{\\operatorname{sech}{\\theta}\\tanh{\\theta}}d\\theta}=-\\int{d\\theta}=-\\theta+C$$</span> Substituting <span class=\"math-container\">$x$</span> back gives <span class=\"math-container\">$$-\\operatorname{arsech}{x}+C\\qquad x&gt;0$$</span>\nFor negative <span class=\"math-container\">$x$</span>, the substitution <span class=\"math-container\">$x=-\\operatorname{sech}{\\theta}$</span> and <span class=\"math-container\">$dx=\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span> give <span class=\"math-container\">$$-\\operatorname{arsech}{\\left(-x\\right)}+C\\qquad x&lt;0$$</span> Combining the two branches gives <span class=\"math-container\">$$-\\operatorname{arsech}{\\left|x\\right|}+C$$</span></p>\n" }, { "answer_id": 2088333, "score": 5, "text": "<p>$\\DeclareMathOperator{\\csch}{csch}\n\\DeclareMathOperator{\\sech}{sech}$\nTry\n$$\n\\csch(\\theta) = \\frac{1}{\\sinh(\\theta)} = \\frac{1/\\cosh(\\theta)}{\\sinh(\\theta)/\\cosh(\\theta)} = \\frac{\\sech(\\theta)}{\\tanh(\\theta)} = \\frac{\\sqrt{1 - \\tanh^2(\\theta)}}{\\tanh(\\theta)} \\, .\n$$</p>\n\n<p>How did I get this? By cheating, basically. The <a href=\"https://en.wikipedia.org/wiki/Lemniscate_of_Gerono\" rel=\"noreferrer\">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \\cos(\\varphi), y = \\sin(\\varphi) \\cos(\\varphi)$. 
I used this parametrization to integrate, which gave me an expression involving $\\frac{\\sqrt{1 - x^2}}{x}$, which led me to this answer.</p>\n" }, { "answer_id": 4182985, "score": 0, "text": "<p>For positive <span class=\"math-container\">$x$</span>, another substitution that does not involve integrating <span class=\"math-container\">$\\csc{x}$</span> or <span class=\"math-container\">$\\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class=\"math-container\">$x^2-1$</span>), where <span class=\"math-container\">$x=\\operatorname{sech}{\\theta}$</span>, <span class=\"math-container\">$dx=-\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span>, and <span class=\"math-container\">$$\\sqrt{1-x^2}=\\sqrt{1-\\operatorname{sech}^2{\\theta}}=\\sqrt{\\tanh^2{\\theta}}=\\tanh{\\theta}$$</span> since <span class=\"math-container\">$\\theta\\geq0$</span> and <span class=\"math-container\">$\\tanh{x}$</span> is positive for that interval. <span class=\"math-container\">$$\\int{\\frac{dx}{x\\sqrt{1-x^2}}}=-\\int{\\frac{\\operatorname{sech}{\\theta}\\tanh{\\theta}}{\\operatorname{sech}{\\theta}\\tanh{\\theta}}d\\theta}=-\\int{d\\theta}=-\\theta+C$$</span> Substituting <span class=\"math-container\">$x$</span> back gives <span class=\"math-container\">$$-\\operatorname{arsech}{x}+C\\qquad x&gt;0$$</span>\nFor negative <span class=\"math-container\">$x$</span>, the substitution <span class=\"math-container\">$x=-\\operatorname{sech}{\\theta}$</span> and <span class=\"math-container\">$dx=\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span> give <span class=\"math-container\">$$-\\operatorname{arsech}{\\left(-x\\right)}+C\\qquad x&lt;0$$</span> Combining the two branches gives <span class=\"math-container\">$$-\\operatorname{arsech}{\\left|x\\right|}+C$$</span></p>\n" }, { "answer_id": 2088333, "score": 5, "text": "<p>$\\DeclareMathOperator{\\csch}{csch}\n\\DeclareMathOperator{\\sech}{sech}$\nTry\n$$\n\\csch(\\theta) = \\frac{1}{\\sinh(\\theta)} = \\frac{1/\\cosh(\\theta)}{\\sinh(\\theta)/\\cosh(\\theta)} = \\frac{\\sech(\\theta)}{\\tanh(\\theta)} = \\frac{\\sqrt{1 - \\tanh^2(\\theta)}}{\\tanh(\\theta)} \\, .\n$$</p>\n\n<p>How did I get this? By cheating, basically. The <a href=\"https://en.wikipedia.org/wiki/Lemniscate_of_Gerono\" rel=\"noreferrer\">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \\cos(\\varphi), y = \\sin(\\varphi) \\cos(\\varphi)$. 
I used this parametrization to integrate, which gave me an expression involving $\\frac{\\sqrt{1 - x^2}}{x}$, which led me to this answer.</p>\n" }, { "answer_id": 4182985, "score": 0, "text": "<p>For positive <span class=\"math-container\">$x$</span>, another substitution that does not involve integrating <span class=\"math-container\">$\\csc{x}$</span> or <span class=\"math-container\">$\\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class=\"math-container\">$x^2-1$</span>), where <span class=\"math-container\">$x=\\operatorname{sech}{\\theta}$</span>, <span class=\"math-container\">$dx=-\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span>, and <span class=\"math-container\">$$\\sqrt{1-x^2}=\\sqrt{1-\\operatorname{sech}^2{\\theta}}=\\sqrt{\\tanh^2{\\theta}}=\\tanh{\\theta}$$</span> since <span class=\"math-container\">$\\theta\\geq0$</span> and <span class=\"math-container\">$\\tanh{x}$</span> is positive for that interval. <span class=\"math-container\">$$\\int{\\frac{dx}{x\\sqrt{1-x^2}}}=-\\int{\\frac{\\operatorname{sech}{\\theta}\\tanh{\\theta}}{\\operatorname{sech}{\\theta}\\tanh{\\theta}}d\\theta}=-\\int{d\\theta}=-\\theta+C$$</span> Substituting <span class=\"math-container\">$x$</span> back gives <span class=\"math-container\">$$-\\operatorname{arsech}{x}+C\\qquad x&gt;0$$</span>\nFor negative <span class=\"math-container\">$x$</span>, the substitution <span class=\"math-container\">$x=-\\operatorname{sech}{\\theta}$</span> and <span class=\"math-container\">$dx=\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span> give <span class=\"math-container\">$$-\\operatorname{arsech}{\\left(-x\\right)}+C\\qquad x&lt;0$$</span> Combining the two branches gives <span class=\"math-container\">$$-\\operatorname{arsech}{\\left|x\\right|}+C$$</span></p>\n" }, { "answer_id": 2088333, "score": 5, "text": "<p>$\\DeclareMathOperator{\\csch}{csch}\n\\DeclareMathOperator{\\sech}{sech}$\nTry\n$$\n\\csch(\\theta) = \\frac{1}{\\sinh(\\theta)} = \\frac{1/\\cosh(\\theta)}{\\sinh(\\theta)/\\cosh(\\theta)} = \\frac{\\sech(\\theta)}{\\tanh(\\theta)} = \\frac{\\sqrt{1 - \\tanh^2(\\theta)}}{\\tanh(\\theta)} \\, .\n$$</p>\n\n<p>How did I get this? By cheating, basically. The <a href=\"https://en.wikipedia.org/wiki/Lemniscate_of_Gerono\" rel=\"noreferrer\">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \\cos(\\varphi), y = \\sin(\\varphi) \\cos(\\varphi)$. 
I used this parametrization to integrate, which gave me an expression involving $\\frac{\\sqrt{1 - x^2}}{x}$, which led me to this answer.</p>\n" }, { "answer_id": 4182985, "score": 0, "text": "<p>For positive <span class=\"math-container\">$x$</span>, another substitution that does not involve integrating <span class=\"math-container\">$\\csc{x}$</span> or <span class=\"math-container\">$\\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class=\"math-container\">$x^2-1$</span>), where <span class=\"math-container\">$x=\\operatorname{sech}{\\theta}$</span>, <span class=\"math-container\">$dx=-\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span>, and <span class=\"math-container\">$$\\sqrt{1-x^2}=\\sqrt{1-\\operatorname{sech}^2{\\theta}}=\\sqrt{\\tanh^2{\\theta}}=\\tanh{\\theta}$$</span> since <span class=\"math-container\">$\\theta\\geq0$</span> and <span class=\"math-container\">$\\tanh{x}$</span> is positive for that interval. <span class=\"math-container\">$$\\int{\\frac{dx}{x\\sqrt{1-x^2}}}=-\\int{\\frac{\\operatorname{sech}{\\theta}\\tanh{\\theta}}{\\operatorname{sech}{\\theta}\\tanh{\\theta}}d\\theta}=-\\int{d\\theta}=-\\theta+C$$</span> Substituting <span class=\"math-container\">$x$</span> back gives <span class=\"math-container\">$$-\\operatorname{arsech}{x}+C\\qquad x&gt;0$$</span>\nFor negative <span class=\"math-container\">$x$</span>, the substitution <span class=\"math-container\">$x=-\\operatorname{sech}{\\theta}$</span> and <span class=\"math-container\">$dx=\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span> give <span class=\"math-container\">$$-\\operatorname{arsech}{\\left(-x\\right)}+C\\qquad x&lt;0$$</span> Combining the two branches gives <span class=\"math-container\">$$-\\operatorname{arsech}{\\left|x\\right|}+C$$</span></p>\n" }, { "answer_id": 2088333, "score": 5, "text": "<p>$\\DeclareMathOperator{\\csch}{csch}\n\\DeclareMathOperator{\\sech}{sech}$\nTry\n$$\n\\csch(\\theta) = \\frac{1}{\\sinh(\\theta)} = \\frac{1/\\cosh(\\theta)}{\\sinh(\\theta)/\\cosh(\\theta)} = \\frac{\\sech(\\theta)}{\\tanh(\\theta)} = \\frac{\\sqrt{1 - \\tanh^2(\\theta)}}{\\tanh(\\theta)} \\, .\n$$</p>\n\n<p>How did I get this? By cheating, basically. The <a href=\"https://en.wikipedia.org/wiki/Lemniscate_of_Gerono\" rel=\"noreferrer\">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \\cos(\\varphi), y = \\sin(\\varphi) \\cos(\\varphi)$. 
I used this parametrization to integrate, which gave me an expression involving $\\frac{\\sqrt{1 - x^2}}{x}$, which led me to this answer.</p>\n" }, { "answer_id": 4182985, "score": 0, "text": "<p>For positive <span class=\"math-container\">$x$</span>, another substitution that does not involve integrating <span class=\"math-container\">$\\csc{x}$</span> or <span class=\"math-container\">$\\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class=\"math-container\">$x^2-1$</span>), where <span class=\"math-container\">$x=\\operatorname{sech}{\\theta}$</span>, <span class=\"math-container\">$dx=-\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span>, and <span class=\"math-container\">$$\\sqrt{1-x^2}=\\sqrt{1-\\operatorname{sech}^2{\\theta}}=\\sqrt{\\tanh^2{\\theta}}=\\tanh{\\theta}$$</span> since <span class=\"math-container\">$\\theta\\geq0$</span> and <span class=\"math-container\">$\\tanh{x}$</span> is positive for that interval. <span class=\"math-container\">$$\\int{\\frac{dx}{x\\sqrt{1-x^2}}}=-\\int{\\frac{\\operatorname{sech}{\\theta}\\tanh{\\theta}}{\\operatorname{sech}{\\theta}\\tanh{\\theta}}d\\theta}=-\\int{d\\theta}=-\\theta+C$$</span> Substituting <span class=\"math-container\">$x$</span> back gives <span class=\"math-container\">$$-\\operatorname{arsech}{x}+C\\qquad x&gt;0$$</span>\nFor negative <span class=\"math-container\">$x$</span>, the substitution <span class=\"math-container\">$x=-\\operatorname{sech}{\\theta}$</span> and <span class=\"math-container\">$dx=\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span> give <span class=\"math-container\">$$-\\operatorname{arsech}{\\left(-x\\right)}+C\\qquad x&lt;0$$</span> Combining the two branches gives <span class=\"math-container\">$$-\\operatorname{arsech}{\\left|x\\right|}+C$$</span></p>\n" }, { "answer_id": 2088333, "score": 5, "text": "<p>$\\DeclareMathOperator{\\csch}{csch}\n\\DeclareMathOperator{\\sech}{sech}$\nTry\n$$\n\\csch(\\theta) = \\frac{1}{\\sinh(\\theta)} = \\frac{1/\\cosh(\\theta)}{\\sinh(\\theta)/\\cosh(\\theta)} = \\frac{\\sech(\\theta)}{\\tanh(\\theta)} = \\frac{\\sqrt{1 - \\tanh^2(\\theta)}}{\\tanh(\\theta)} \\, .\n$$</p>\n\n<p>How did I get this? By cheating, basically. The <a href=\"https://en.wikipedia.org/wiki/Lemniscate_of_Gerono\" rel=\"noreferrer\">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \\cos(\\varphi), y = \\sin(\\varphi) \\cos(\\varphi)$. 
I used this parametrization to integrate, which gave me an expression involving $\\frac{\\sqrt{1 - x^2}}{x}$, which led me to this answer.</p>\n" }, { "answer_id": 4182985, "score": 0, "text": "<p>For positive <span class=\"math-container\">$x$</span>, another substitution that does not involve integrating <span class=\"math-container\">$\\csc{x}$</span> or <span class=\"math-container\">$\\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class=\"math-container\">$x^2-1$</span>), where <span class=\"math-container\">$x=\\operatorname{sech}{\\theta}$</span>, <span class=\"math-container\">$dx=-\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span>, and <span class=\"math-container\">$$\\sqrt{1-x^2}=\\sqrt{1-\\operatorname{sech}^2{\\theta}}=\\sqrt{\\tanh^2{\\theta}}=\\tanh{\\theta}$$</span> since <span class=\"math-container\">$\\theta\\geq0$</span> and <span class=\"math-container\">$\\tanh{x}$</span> is positive for that interval. <span class=\"math-container\">$$\\int{\\frac{dx}{x\\sqrt{1-x^2}}}=-\\int{\\frac{\\operatorname{sech}{\\theta}\\tanh{\\theta}}{\\operatorname{sech}{\\theta}\\tanh{\\theta}}d\\theta}=-\\int{d\\theta}=-\\theta+C$$</span> Substituting <span class=\"math-container\">$x$</span> back gives <span class=\"math-container\">$$-\\operatorname{arsech}{x}+C\\qquad x&gt;0$$</span>\nFor negative <span class=\"math-container\">$x$</span>, the substitution <span class=\"math-container\">$x=-\\operatorname{sech}{\\theta}$</span> and <span class=\"math-container\">$dx=\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span> give <span class=\"math-container\">$$-\\operatorname{arsech}{\\left(-x\\right)}+C\\qquad x&lt;0$$</span> Combining the two branches gives <span class=\"math-container\">$$-\\operatorname{arsech}{\\left|x\\right|}+C$$</span></p>\n" }, { "answer_id": 2088333, "score": 5, "text": "<p>$\\DeclareMathOperator{\\csch}{csch}\n\\DeclareMathOperator{\\sech}{sech}$\nTry\n$$\n\\csch(\\theta) = \\frac{1}{\\sinh(\\theta)} = \\frac{1/\\cosh(\\theta)}{\\sinh(\\theta)/\\cosh(\\theta)} = \\frac{\\sech(\\theta)}{\\tanh(\\theta)} = \\frac{\\sqrt{1 - \\tanh^2(\\theta)}}{\\tanh(\\theta)} \\, .\n$$</p>\n\n<p>How did I get this? By cheating, basically. The <a href=\"https://en.wikipedia.org/wiki/Lemniscate_of_Gerono\" rel=\"noreferrer\">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \\cos(\\varphi), y = \\sin(\\varphi) \\cos(\\varphi)$. 
I used this parametrization to integrate, which gave me an expression involving $\\frac{\\sqrt{1 - x^2}}{x}$, which led me to this answer.</p>\n" }, { "answer_id": 4182985, "score": 0, "text": "<p>For positive <span class=\"math-container\">$x$</span>, another substitution that does not involve integrating <span class=\"math-container\">$\\csc{x}$</span> or <span class=\"math-container\">$\\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class=\"math-container\">$x^2-1$</span>), where <span class=\"math-container\">$x=\\operatorname{sech}{\\theta}$</span>, <span class=\"math-container\">$dx=-\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span>, and <span class=\"math-container\">$$\\sqrt{1-x^2}=\\sqrt{1-\\operatorname{sech}^2{\\theta}}=\\sqrt{\\tanh^2{\\theta}}=\\tanh{\\theta}$$</span> since <span class=\"math-container\">$\\theta\\geq0$</span> and <span class=\"math-container\">$\\tanh{x}$</span> is positive for that interval. <span class=\"math-container\">$$\\int{\\frac{dx}{x\\sqrt{1-x^2}}}=-\\int{\\frac{\\operatorname{sech}{\\theta}\\tanh{\\theta}}{\\operatorname{sech}{\\theta}\\tanh{\\theta}}d\\theta}=-\\int{d\\theta}=-\\theta+C$$</span> Substituting <span class=\"math-container\">$x$</span> back gives <span class=\"math-container\">$$-\\operatorname{arsech}{x}+C\\qquad x&gt;0$$</span>\nFor negative <span class=\"math-container\">$x$</span>, the substitution <span class=\"math-container\">$x=-\\operatorname{sech}{\\theta}$</span> and <span class=\"math-container\">$dx=\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span> give <span class=\"math-container\">$$-\\operatorname{arsech}{\\left(-x\\right)}+C\\qquad x&lt;0$$</span> Combining the two branches gives <span class=\"math-container\">$$-\\operatorname{arsech}{\\left|x\\right|}+C$$</span></p>\n" }, { "answer_id": 2088333, "score": 5, "text": "<p>$\\DeclareMathOperator{\\csch}{csch}\n\\DeclareMathOperator{\\sech}{sech}$\nTry\n$$\n\\csch(\\theta) = \\frac{1}{\\sinh(\\theta)} = \\frac{1/\\cosh(\\theta)}{\\sinh(\\theta)/\\cosh(\\theta)} = \\frac{\\sech(\\theta)}{\\tanh(\\theta)} = \\frac{\\sqrt{1 - \\tanh^2(\\theta)}}{\\tanh(\\theta)} \\, .\n$$</p>\n\n<p>How did I get this? By cheating, basically. The <a href=\"https://en.wikipedia.org/wiki/Lemniscate_of_Gerono\" rel=\"noreferrer\">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \\cos(\\varphi), y = \\sin(\\varphi) \\cos(\\varphi)$. 
I used this parametrization to integrate, which gave me an expression involving $\\frac{\\sqrt{1 - x^2}}{x}$, which led me to this answer.</p>\n" }, { "answer_id": 4182985, "score": 0, "text": "<p>For positive <span class=\"math-container\">$x$</span>, another substitution that does not involve integrating <span class=\"math-container\">$\\csc{x}$</span> or <span class=\"math-container\">$\\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class=\"math-container\">$x^2-1$</span>), where <span class=\"math-container\">$x=\\operatorname{sech}{\\theta}$</span>, <span class=\"math-container\">$dx=-\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span>, and <span class=\"math-container\">$$\\sqrt{1-x^2}=\\sqrt{1-\\operatorname{sech}^2{\\theta}}=\\sqrt{\\tanh^2{\\theta}}=\\tanh{\\theta}$$</span> since <span class=\"math-container\">$\\theta\\geq0$</span> and <span class=\"math-container\">$\\tanh{x}$</span> is positive for that interval. <span class=\"math-container\">$$\\int{\\frac{dx}{x\\sqrt{1-x^2}}}=-\\int{\\frac{\\operatorname{sech}{\\theta}\\tanh{\\theta}}{\\operatorname{sech}{\\theta}\\tanh{\\theta}}d\\theta}=-\\int{d\\theta}=-\\theta+C$$</span> Substituting <span class=\"math-container\">$x$</span> back gives <span class=\"math-container\">$$-\\operatorname{arsech}{x}+C\\qquad x&gt;0$$</span>\nFor negative <span class=\"math-container\">$x$</span>, the substitution <span class=\"math-container\">$x=-\\operatorname{sech}{\\theta}$</span> and <span class=\"math-container\">$dx=\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span> give <span class=\"math-container\">$$-\\operatorname{arsech}{\\left(-x\\right)}+C\\qquad x&lt;0$$</span> Combining the two branches gives <span class=\"math-container\">$$-\\operatorname{arsech}{\\left|x\\right|}+C$$</span></p>\n" }, { "answer_id": 2088333, "score": 5, "text": "<p>$\\DeclareMathOperator{\\csch}{csch}\n\\DeclareMathOperator{\\sech}{sech}$\nTry\n$$\n\\csch(\\theta) = \\frac{1}{\\sinh(\\theta)} = \\frac{1/\\cosh(\\theta)}{\\sinh(\\theta)/\\cosh(\\theta)} = \\frac{\\sech(\\theta)}{\\tanh(\\theta)} = \\frac{\\sqrt{1 - \\tanh^2(\\theta)}}{\\tanh(\\theta)} \\, .\n$$</p>\n\n<p>How did I get this? By cheating, basically. The <a href=\"https://en.wikipedia.org/wiki/Lemniscate_of_Gerono\" rel=\"noreferrer\">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \\cos(\\varphi), y = \\sin(\\varphi) \\cos(\\varphi)$. 
I used this parametrization to integrate, which gave me an expression involving $\\frac{\\sqrt{1 - x^2}}{x}$, which led me to this answer.</p>\n" }, { "answer_id": 4182985, "score": 0, "text": "<p>For positive <span class=\"math-container\">$x$</span>, another substitution that does not involve integrating <span class=\"math-container\">$\\csc{x}$</span> or <span class=\"math-container\">$\\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class=\"math-container\">$x^2-1$</span>), where <span class=\"math-container\">$x=\\operatorname{sech}{\\theta}$</span>, <span class=\"math-container\">$dx=-\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span>, and <span class=\"math-container\">$$\\sqrt{1-x^2}=\\sqrt{1-\\operatorname{sech}^2{\\theta}}=\\sqrt{\\tanh^2{\\theta}}=\\tanh{\\theta}$$</span> since <span class=\"math-container\">$\\theta\\geq0$</span> and <span class=\"math-container\">$\\tanh{x}$</span> is positive for that interval. <span class=\"math-container\">$$\\int{\\frac{dx}{x\\sqrt{1-x^2}}}=-\\int{\\frac{\\operatorname{sech}{\\theta}\\tanh{\\theta}}{\\operatorname{sech}{\\theta}\\tanh{\\theta}}d\\theta}=-\\int{d\\theta}=-\\theta+C$$</span> Substituting <span class=\"math-container\">$x$</span> back gives <span class=\"math-container\">$$-\\operatorname{arsech}{x}+C\\qquad x&gt;0$$</span>\nFor negative <span class=\"math-container\">$x$</span>, the substitution <span class=\"math-container\">$x=-\\operatorname{sech}{\\theta}$</span> and <span class=\"math-container\">$dx=\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span> give <span class=\"math-container\">$$-\\operatorname{arsech}{\\left(-x\\right)}+C\\qquad x&lt;0$$</span> Combining the two branches gives <span class=\"math-container\">$$-\\operatorname{arsech}{\\left|x\\right|}+C$$</span></p>\n" }, { "answer_id": 2088333, "score": 5, "text": "<p>$\\DeclareMathOperator{\\csch}{csch}\n\\DeclareMathOperator{\\sech}{sech}$\nTry\n$$\n\\csch(\\theta) = \\frac{1}{\\sinh(\\theta)} = \\frac{1/\\cosh(\\theta)}{\\sinh(\\theta)/\\cosh(\\theta)} = \\frac{\\sech(\\theta)}{\\tanh(\\theta)} = \\frac{\\sqrt{1 - \\tanh^2(\\theta)}}{\\tanh(\\theta)} \\, .\n$$</p>\n\n<p>How did I get this? By cheating, basically. The <a href=\"https://en.wikipedia.org/wiki/Lemniscate_of_Gerono\" rel=\"noreferrer\">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \\cos(\\varphi), y = \\sin(\\varphi) \\cos(\\varphi)$. 
I used this parametrization to integrate, which gave me an expression involving $\\frac{\\sqrt{1 - x^2}}{x}$, which led me to this answer.</p>\n" }, { "answer_id": 4182985, "score": 0, "text": "<p>For positive <span class=\"math-container\">$x$</span>, another substitution that does not involve integrating <span class=\"math-container\">$\\csc{x}$</span> or <span class=\"math-container\">$\\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class=\"math-container\">$x^2-1$</span>), where <span class=\"math-container\">$x=\\operatorname{sech}{\\theta}$</span>, <span class=\"math-container\">$dx=-\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span>, and <span class=\"math-container\">$$\\sqrt{1-x^2}=\\sqrt{1-\\operatorname{sech}^2{\\theta}}=\\sqrt{\\tanh^2{\\theta}}=\\tanh{\\theta}$$</span> since <span class=\"math-container\">$\\theta\\geq0$</span> and <span class=\"math-container\">$\\tanh{x}$</span> is positive for that interval. <span class=\"math-container\">$$\\int{\\frac{dx}{x\\sqrt{1-x^2}}}=-\\int{\\frac{\\operatorname{sech}{\\theta}\\tanh{\\theta}}{\\operatorname{sech}{\\theta}\\tanh{\\theta}}d\\theta}=-\\int{d\\theta}=-\\theta+C$$</span> Substituting <span class=\"math-container\">$x$</span> back gives <span class=\"math-container\">$$-\\operatorname{arsech}{x}+C\\qquad x&gt;0$$</span>\nFor negative <span class=\"math-container\">$x$</span>, the substitution <span class=\"math-container\">$x=-\\operatorname{sech}{\\theta}$</span> and <span class=\"math-container\">$dx=\\operatorname{sech}{\\theta}\\tanh{\\theta}\\,d\\theta$</span> give <span class=\"math-container\">$$-\\operatorname{arsech}{\\left(-x\\right)}+C\\qquad x&lt;0$$</span> Combining the two branches gives <span class=\"math-container\">$$-\\operatorname{arsech}{\\left|x\\right|}+C$$</span></p>\n" } ]
Question: Hyperbolic substitution for $\int\frac{dx}{x\sqrt{1-x^2}}$ Answer 1: <p>$\DeclareMathOperator{\csch}{csch} \DeclareMathOperator{\sech}{sech}$ Try $$ \csch(\theta) = \frac{1}{\sinh(\theta)} = \frac{1/\cosh(\theta)}{\sinh(\theta)/\cosh(\theta)} = \frac{\sech(\theta)}{\tanh(\theta)} = \frac{\sqrt{1 - \tanh^2(\theta)}}{\tanh(\theta)} \, . $$</p> <p>How did I get this? By cheating, basically. The <a href="https://en.wikipedia.org/wiki/Lemniscate_of_Gerono" rel="noreferrer">Wikipedia page for the lemniscate of Gerono</a> has different parametrizations of the curve, one of which is $x = \cos(\varphi), y = \sin(\varphi) \cos(\varphi)$. I used this parametrization to integrate, which gave me an expression involving $\frac{\sqrt{1 - x^2}}{x}$, which led me to this answer.</p> Answer 2: <p>For positive <span class="math-container">$x$</span>, another substitution that does not involve integrating <span class="math-container">$\csc{x}$</span> or <span class="math-container">$\operatorname{csch}{x}$</span> is a hyperbolic secant substitution (similar to a secant/cosecant substitution for <span class="math-container">$x^2-1$</span>), where <span class="math-container">$x=\operatorname{sech}{\theta}$</span>, <span class="math-container">$dx=-\operatorname{sech}{\theta}\tanh{\theta}\,d\theta$</span>, and <span class="math-container">$$\sqrt{1-x^2}=\sqrt{1-\operatorname{sech}^2{\theta}}=\sqrt{\tanh^2{\theta}}=\tanh{\theta}$$</span> since <span class="math-container">$\theta\geq0$</span> and <span class="math-container">$\tanh{x}$</span> is positive for that interval. <span class="math-container">$$\int{\frac{dx}{x\sqrt{1-x^2}}}=-\int{\frac{\operatorname{sech}{\theta}\tanh{\theta}}{\operatorname{sech}{\theta}\tanh{\theta}}d\theta}=-\int{d\theta}=-\theta+C$$</span> Substituting <span class="math-container">$x$</span> back gives <span class="math-container">$$-\operatorname{arsech}{x}+C\qquad x&gt;0$$</span> For negative <span class="math-container">$x$</span>, the substitution <span class="math-container">$x=-\operatorname{sech}{\theta}$</span> and <span class="math-container">$dx=\operatorname{sech}{\theta}\tanh{\theta}\,d\theta$</span> give <span class="math-container">$$-\operatorname{arsech}{\left(-x\right)}+C\qquad x&lt;0$$</span> Combining the two branches gives <span class="math-container">$$-\operatorname{arsech}{\left|x\right|}+C$$</span></p>
$k$-jet transitivity of diffeomorphism group
2,088,706
6
https://math.stackexchange.com/questions/2088706/k-jet-transitivity-of-diffeomorphism-group
<p>Given a connected smooth manifold $M$ and an invertible jet $\xi \in {\rm inv} J^k_p(M,M)_q$, what are the required conditions for the existence of a diffeomorphism $\phi \in {\rm Diff}(M)$ such that $j^k_p \phi = \xi$? What about if we want $\phi$ to be isotopic to the identity? Given a smooth family $\xi_\epsilon \in {\rm inv} J^k_p(M,M)_q$ can we get a smooth family $\phi_\epsilon \in {\rm Diff}_0 (M)$?</p> <p>It's clear that we can find such a diffeomorphism from a neighbourhood of $p$ to a neighbourhood of $q$, and also that there must be some smooth map $M \to M$ contacting $\xi$, but I have no idea how to extend the former or make the latter a diffeomorphism. At least in low dimension it feels like the only condition should be that $\xi$ is orientation-preserving, but I have no idea how to establish this. </p>
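A standard coordinate picture may help before the answers: in charts centred at $p$ and $q$, a jet $\xi \in J^k_p(M,M)_q$ is represented by a Taylor polynomial, and invertibility constrains only its linear part, $$\xi \;\longleftrightarrow\; P(x)=\sum_{1\le|\alpha|\le k}\frac{a_\alpha}{\alpha!}\,x^\alpha, \qquad \xi \in {\rm inv}\, J^k_p(M,M)_q \iff \bigl(a_\alpha\bigr)_{|\alpha|=1}\in GL(n,\mathbb R),$$ where $\alpha$ ranges over multi-indices and $n=\dim M$; this is what makes the reduction to the linear part in the answers below possible.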
[ { "answer_id": 2090253, "score": 2, "text": "<p>I think that if you assume $\\dim(M)\\geq 2$ (?) and that $M$ is connected (or at least that all connected components of $M$ are diffeomorphic), then you can always construct an appropriate diffeomorphism which is isotopic to the identity. I can't give a full proof, but I think I can describe how to get it: </p>\n\n<p>The main input I can give is reducing things to the case that $p=q$. This is done by choosing a smooth injective path $c:[0,1]\\to M$ connecting $p$ and $q$, then extending the velocity field $c'$ to a vector field in a tubular neighborhood of $c([0,1])$ the path and then multiplying it by an appropriate bump function. This gives you a vector field with compact support (which thus has a global flow) and agrees with the given velocity field locally around the chosen path. Hence the flow of this vector field at time $t=1$ is a diffeomorphism (with compact support) that maps $p$ to $q$. </p>\n\n<p>Composing with this diffeomorphism you can reduce things to the case $q=p$, in which working with vector fields becomes much easier. Since we are now talking about diffeomorphisms fixing $p$, we can use vector fields vanishing at $p$, and any germ of such a vector field now easily extends to a globally defined, compactly supported vector field, which has a global flow. Now one could either try to study the map from jets of vector fields to jets of diffeomorphism induced by the local flow. The other possibility would be to argue step by step, showing next that any one-jet can be realized by an appropriate vector field and so on. </p>\n" }, { "answer_id": 2091384, "score": 1, "text": "<p>As pointed out by Andreas we can assume $p=q$. Fix a chart $(U,\\psi)$ with $\\psi(p)=0$ and let $g : \\mathbb R^n \\to \\mathbb R^n$ be the unique $k^\\text{th}$ order polynomial such that $j^k_p (\\psi^{-1} g \\psi) = \\xi$. Note that $g(0) = 0$ because $\\xi \\in J_p (M,M)_p$. Let $L$ be the linear part of $g$. Restrict the domain of $g$ to some ball $B_R(0)$ small enough that $g$ is an embedding.</p>\n\n<p>Define $g : (0,1] \\times B_R \\to \\mathbb R^n$ by $g_t(x) = g(tx)/t$ and note that $\\lim_{t \\to 0} g_t = g'(0) = L$, so this $g_t$ defines an isotopy of $g$ to $L$. Any smooth path from $L$ to the identity in $GL(n,\\mathbb R)$ (here we need the condition that $\\xi$ is orientation-preserving!) is then an isotopy of $L$ to the inclusion map $i :B_R \\to \\mathbb R^n$; so composing these two isotopies we get a diffeotopy $f : [0,1] \\times \\mathbb R^n \\to \\mathbb R^n$ from $g$ to $i$.</p>\n\n<p>Now, let $r_1,r_2 &lt; R$ be small enough that the origin-centred balls $B_{r_1},B_{r_2}$ lie inside $\\psi(U)$ and $f(I\\times B_{r_1}) \\subset B_{r_2}$$^\\dagger$; and define the isotopy</p>\n\n<p>$$ F : [0,1] \\times \\psi^{-1}(B_{r_1}) \\to M$$\nby $F_t = \\psi^{-1} f_t \\psi$. Then applying the isotopy extension theorem (see e.g. Hirsch section 8.1) we get a diffeotopy of $M$ that agrees with $F$ on a neighbourhood of $p$, and thus whose endpoint has jet $\\xi$ at $p$.</p>\n\n<p>$\\dagger$ We can pull this off because 0-fixing polynomials satisfy $g(tx) \\le C t g(x)$ for $t\\in[0,1]$ and $C$ independent of $t$, so the dilation isotopy can't make the image blow up; and similarly the path in $GL(n,\\mathbb R)$ can be chosen so that the operator norm is bounded.</p>\n" }, { "answer_id": 2090253, "score": 2, "text": "<p>I think that if you assume $\\dim(M)\\geq 2$ (?) 
and that $M$ is connected (or at least that all connected components of $M$ are diffeomorphic), then you can always construct an appropriate diffeomorphism which is isotopic to the identity. I can't give a full proof, but I think I can describe how to get it: </p>\n\n<p>The main input I can give is reducing things to the case that $p=q$. This is done by choosing a smooth injective path $c:[0,1]\\to M$ connecting $p$ and $q$, then extending the velocity field $c'$ to a vector field in a tubular neighborhood of $c([0,1])$ the path and then multiplying it by an appropriate bump function. This gives you a vector field with compact support (which thus has a global flow) and agrees with the given velocity field locally around the chosen path. Hence the flow of this vector field at time $t=1$ is a diffeomorphism (with compact support) that maps $p$ to $q$. </p>\n\n<p>Composing with this diffeomorphism you can reduce things to the case $q=p$, in which working with vector fields becomes much easier. Since we are now talking about diffeomorphisms fixing $p$, we can use vector fields vanishing at $p$, and any germ of such a vector field now easily extends to a globally defined, compactly supported vector field, which has a global flow. Now one could either try to study the map from jets of vector fields to jets of diffeomorphism induced by the local flow. The other possibility would be to argue step by step, showing next that any one-jet can be realized by an appropriate vector field and so on. </p>\n" }, { "answer_id": 2091384, "score": 1, "text": "<p>As pointed out by Andreas we can assume $p=q$. Fix a chart $(U,\\psi)$ with $\\psi(p)=0$ and let $g : \\mathbb R^n \\to \\mathbb R^n$ be the unique $k^\\text{th}$ order polynomial such that $j^k_p (\\psi^{-1} g \\psi) = \\xi$. Note that $g(0) = 0$ because $\\xi \\in J_p (M,M)_p$. Let $L$ be the linear part of $g$. Restrict the domain of $g$ to some ball $B_R(0)$ small enough that $g$ is an embedding.</p>\n\n<p>Define $g : (0,1] \\times B_R \\to \\mathbb R^n$ by $g_t(x) = g(tx)/t$ and note that $\\lim_{t \\to 0} g_t = g'(0) = L$, so this $g_t$ defines an isotopy of $g$ to $L$. Any smooth path from $L$ to the identity in $GL(n,\\mathbb R)$ (here we need the condition that $\\xi$ is orientation-preserving!) is then an isotopy of $L$ to the inclusion map $i :B_R \\to \\mathbb R^n$; so composing these two isotopies we get a diffeotopy $f : [0,1] \\times \\mathbb R^n \\to \\mathbb R^n$ from $g$ to $i$.</p>\n\n<p>Now, let $r_1,r_2 &lt; R$ be small enough that the origin-centred balls $B_{r_1},B_{r_2}$ lie inside $\\psi(U)$ and $f(I\\times B_{r_1}) \\subset B_{r_2}$$^\\dagger$; and define the isotopy</p>\n\n<p>$$ F : [0,1] \\times \\psi^{-1}(B_{r_1}) \\to M$$\nby $F_t = \\psi^{-1} f_t \\psi$. Then applying the isotopy extension theorem (see e.g. Hirsch section 8.1) we get a diffeotopy of $M$ that agrees with $F$ on a neighbourhood of $p$, and thus whose endpoint has jet $\\xi$ at $p$.</p>\n\n<p>$\\dagger$ We can pull this off because 0-fixing polynomials satisfy $g(tx) \\le C t g(x)$ for $t\\in[0,1]$ and $C$ independent of $t$, so the dilation isotopy can't make the image blow up; and similarly the path in $GL(n,\\mathbb R)$ can be chosen so that the operator norm is bounded.</p>\n" }, { "answer_id": 2090253, "score": 2, "text": "<p>I think that if you assume $\\dim(M)\\geq 2$ (?) 
and that $M$ is connected (or at least that all connected components of $M$ are diffeomorphic), then you can always construct an appropriate diffeomorphism which is isotopic to the identity. I can't give a full proof, but I think I can describe how to get it: </p>\n\n<p>The main input I can give is reducing things to the case that $p=q$. This is done by choosing a smooth injective path $c:[0,1]\\to M$ connecting $p$ and $q$, then extending the velocity field $c'$ to a vector field in a tubular neighborhood of $c([0,1])$ the path and then multiplying it by an appropriate bump function. This gives you a vector field with compact support (which thus has a global flow) and agrees with the given velocity field locally around the chosen path. Hence the flow of this vector field at time $t=1$ is a diffeomorphism (with compact support) that maps $p$ to $q$. </p>\n\n<p>Composing with this diffeomorphism you can reduce things to the case $q=p$, in which working with vector fields becomes much easier. Since we are now talking about diffeomorphisms fixing $p$, we can use vector fields vanishing at $p$, and any germ of such a vector field now easily extends to a globally defined, compactly supported vector field, which has a global flow. Now one could either try to study the map from jets of vector fields to jets of diffeomorphism induced by the local flow. The other possibility would be to argue step by step, showing next that any one-jet can be realized by an appropriate vector field and so on. </p>\n" }, { "answer_id": 2091384, "score": 1, "text": "<p>As pointed out by Andreas we can assume $p=q$. Fix a chart $(U,\\psi)$ with $\\psi(p)=0$ and let $g : \\mathbb R^n \\to \\mathbb R^n$ be the unique $k^\\text{th}$ order polynomial such that $j^k_p (\\psi^{-1} g \\psi) = \\xi$. Note that $g(0) = 0$ because $\\xi \\in J_p (M,M)_p$. Let $L$ be the linear part of $g$. Restrict the domain of $g$ to some ball $B_R(0)$ small enough that $g$ is an embedding.</p>\n\n<p>Define $g : (0,1] \\times B_R \\to \\mathbb R^n$ by $g_t(x) = g(tx)/t$ and note that $\\lim_{t \\to 0} g_t = g'(0) = L$, so this $g_t$ defines an isotopy of $g$ to $L$. Any smooth path from $L$ to the identity in $GL(n,\\mathbb R)$ (here we need the condition that $\\xi$ is orientation-preserving!) is then an isotopy of $L$ to the inclusion map $i :B_R \\to \\mathbb R^n$; so composing these two isotopies we get a diffeotopy $f : [0,1] \\times \\mathbb R^n \\to \\mathbb R^n$ from $g$ to $i$.</p>\n\n<p>Now, let $r_1,r_2 &lt; R$ be small enough that the origin-centred balls $B_{r_1},B_{r_2}$ lie inside $\\psi(U)$ and $f(I\\times B_{r_1}) \\subset B_{r_2}$$^\\dagger$; and define the isotopy</p>\n\n<p>$$ F : [0,1] \\times \\psi^{-1}(B_{r_1}) \\to M$$\nby $F_t = \\psi^{-1} f_t \\psi$. Then applying the isotopy extension theorem (see e.g. Hirsch section 8.1) we get a diffeotopy of $M$ that agrees with $F$ on a neighbourhood of $p$, and thus whose endpoint has jet $\\xi$ at $p$.</p>\n\n<p>$\\dagger$ We can pull this off because 0-fixing polynomials satisfy $g(tx) \\le C t g(x)$ for $t\\in[0,1]$ and $C$ independent of $t$, so the dilation isotopy can't make the image blow up; and similarly the path in $GL(n,\\mathbb R)$ can be chosen so that the operator norm is bounded.</p>\n" }, { "answer_id": 2090253, "score": 2, "text": "<p>I think that if you assume $\\dim(M)\\geq 2$ (?) 
and that $M$ is connected (or at least that all connected components of $M$ are diffeomorphic), then you can always construct an appropriate diffeomorphism which is isotopic to the identity. I can't give a full proof, but I think I can describe how to get it: </p>\n\n<p>The main input I can give is reducing things to the case that $p=q$. This is done by choosing a smooth injective path $c:[0,1]\\to M$ connecting $p$ and $q$, then extending the velocity field $c'$ to a vector field in a tubular neighborhood of $c([0,1])$ the path and then multiplying it by an appropriate bump function. This gives you a vector field with compact support (which thus has a global flow) and agrees with the given velocity field locally around the chosen path. Hence the flow of this vector field at time $t=1$ is a diffeomorphism (with compact support) that maps $p$ to $q$. </p>\n\n<p>Composing with this diffeomorphism you can reduce things to the case $q=p$, in which working with vector fields becomes much easier. Since we are now talking about diffeomorphisms fixing $p$, we can use vector fields vanishing at $p$, and any germ of such a vector field now easily extends to a globally defined, compactly supported vector field, which has a global flow. Now one could either try to study the map from jets of vector fields to jets of diffeomorphism induced by the local flow. The other possibility would be to argue step by step, showing next that any one-jet can be realized by an appropriate vector field and so on. </p>\n" }, { "answer_id": 2091384, "score": 1, "text": "<p>As pointed out by Andreas we can assume $p=q$. Fix a chart $(U,\\psi)$ with $\\psi(p)=0$ and let $g : \\mathbb R^n \\to \\mathbb R^n$ be the unique $k^\\text{th}$ order polynomial such that $j^k_p (\\psi^{-1} g \\psi) = \\xi$. Note that $g(0) = 0$ because $\\xi \\in J_p (M,M)_p$. Let $L$ be the linear part of $g$. Restrict the domain of $g$ to some ball $B_R(0)$ small enough that $g$ is an embedding.</p>\n\n<p>Define $g : (0,1] \\times B_R \\to \\mathbb R^n$ by $g_t(x) = g(tx)/t$ and note that $\\lim_{t \\to 0} g_t = g'(0) = L$, so this $g_t$ defines an isotopy of $g$ to $L$. Any smooth path from $L$ to the identity in $GL(n,\\mathbb R)$ (here we need the condition that $\\xi$ is orientation-preserving!) is then an isotopy of $L$ to the inclusion map $i :B_R \\to \\mathbb R^n$; so composing these two isotopies we get a diffeotopy $f : [0,1] \\times \\mathbb R^n \\to \\mathbb R^n$ from $g$ to $i$.</p>\n\n<p>Now, let $r_1,r_2 &lt; R$ be small enough that the origin-centred balls $B_{r_1},B_{r_2}$ lie inside $\\psi(U)$ and $f(I\\times B_{r_1}) \\subset B_{r_2}$$^\\dagger$; and define the isotopy</p>\n\n<p>$$ F : [0,1] \\times \\psi^{-1}(B_{r_1}) \\to M$$\nby $F_t = \\psi^{-1} f_t \\psi$. Then applying the isotopy extension theorem (see e.g. Hirsch section 8.1) we get a diffeotopy of $M$ that agrees with $F$ on a neighbourhood of $p$, and thus whose endpoint has jet $\\xi$ at $p$.</p>\n\n<p>$\\dagger$ We can pull this off because 0-fixing polynomials satisfy $g(tx) \\le C t g(x)$ for $t\\in[0,1]$ and $C$ independent of $t$, so the dilation isotopy can't make the image blow up; and similarly the path in $GL(n,\\mathbb R)$ can be chosen so that the operator norm is bounded.</p>\n" }, { "answer_id": 2090253, "score": 2, "text": "<p>I think that if you assume $\\dim(M)\\geq 2$ (?) 
and that $M$ is connected (or at least that all connected components of $M$ are diffeomorphic), then you can always construct an appropriate diffeomorphism which is isotopic to the identity. I can't give a full proof, but I think I can describe how to get it: </p>\n\n<p>The main input I can give is reducing things to the case that $p=q$. This is done by choosing a smooth injective path $c:[0,1]\\to M$ connecting $p$ and $q$, then extending the velocity field $c'$ to a vector field in a tubular neighborhood of $c([0,1])$ the path and then multiplying it by an appropriate bump function. This gives you a vector field with compact support (which thus has a global flow) and agrees with the given velocity field locally around the chosen path. Hence the flow of this vector field at time $t=1$ is a diffeomorphism (with compact support) that maps $p$ to $q$. </p>\n\n<p>Composing with this diffeomorphism you can reduce things to the case $q=p$, in which working with vector fields becomes much easier. Since we are now talking about diffeomorphisms fixing $p$, we can use vector fields vanishing at $p$, and any germ of such a vector field now easily extends to a globally defined, compactly supported vector field, which has a global flow. Now one could either try to study the map from jets of vector fields to jets of diffeomorphism induced by the local flow. The other possibility would be to argue step by step, showing next that any one-jet can be realized by an appropriate vector field and so on. </p>\n" }, { "answer_id": 2091384, "score": 1, "text": "<p>As pointed out by Andreas we can assume $p=q$. Fix a chart $(U,\\psi)$ with $\\psi(p)=0$ and let $g : \\mathbb R^n \\to \\mathbb R^n$ be the unique $k^\\text{th}$ order polynomial such that $j^k_p (\\psi^{-1} g \\psi) = \\xi$. Note that $g(0) = 0$ because $\\xi \\in J_p (M,M)_p$. Let $L$ be the linear part of $g$. Restrict the domain of $g$ to some ball $B_R(0)$ small enough that $g$ is an embedding.</p>\n\n<p>Define $g : (0,1] \\times B_R \\to \\mathbb R^n$ by $g_t(x) = g(tx)/t$ and note that $\\lim_{t \\to 0} g_t = g'(0) = L$, so this $g_t$ defines an isotopy of $g$ to $L$. Any smooth path from $L$ to the identity in $GL(n,\\mathbb R)$ (here we need the condition that $\\xi$ is orientation-preserving!) is then an isotopy of $L$ to the inclusion map $i :B_R \\to \\mathbb R^n$; so composing these two isotopies we get a diffeotopy $f : [0,1] \\times \\mathbb R^n \\to \\mathbb R^n$ from $g$ to $i$.</p>\n\n<p>Now, let $r_1,r_2 &lt; R$ be small enough that the origin-centred balls $B_{r_1},B_{r_2}$ lie inside $\\psi(U)$ and $f(I\\times B_{r_1}) \\subset B_{r_2}$$^\\dagger$; and define the isotopy</p>\n\n<p>$$ F : [0,1] \\times \\psi^{-1}(B_{r_1}) \\to M$$\nby $F_t = \\psi^{-1} f_t \\psi$. Then applying the isotopy extension theorem (see e.g. Hirsch section 8.1) we get a diffeotopy of $M$ that agrees with $F$ on a neighbourhood of $p$, and thus whose endpoint has jet $\\xi$ at $p$.</p>\n\n<p>$\\dagger$ We can pull this off because 0-fixing polynomials satisfy $g(tx) \\le C t g(x)$ for $t\\in[0,1]$ and $C$ independent of $t$, so the dilation isotopy can't make the image blow up; and similarly the path in $GL(n,\\mathbb R)$ can be chosen so that the operator norm is bounded.</p>\n" }, { "answer_id": 2090253, "score": 2, "text": "<p>I think that if you assume $\\dim(M)\\geq 2$ (?) 
and that $M$ is connected (or at least that all connected components of $M$ are diffeomorphic), then you can always construct an appropriate diffeomorphism which is isotopic to the identity. I can't give a full proof, but I think I can describe how to get it: </p>\n\n<p>The main input I can give is reducing things to the case that $p=q$. This is done by choosing a smooth injective path $c:[0,1]\\to M$ connecting $p$ and $q$, then extending the velocity field $c'$ to a vector field in a tubular neighborhood of $c([0,1])$ the path and then multiplying it by an appropriate bump function. This gives you a vector field with compact support (which thus has a global flow) and agrees with the given velocity field locally around the chosen path. Hence the flow of this vector field at time $t=1$ is a diffeomorphism (with compact support) that maps $p$ to $q$. </p>\n\n<p>Composing with this diffeomorphism you can reduce things to the case $q=p$, in which working with vector fields becomes much easier. Since we are now talking about diffeomorphisms fixing $p$, we can use vector fields vanishing at $p$, and any germ of such a vector field now easily extends to a globally defined, compactly supported vector field, which has a global flow. Now one could either try to study the map from jets of vector fields to jets of diffeomorphism induced by the local flow. The other possibility would be to argue step by step, showing next that any one-jet can be realized by an appropriate vector field and so on. </p>\n" }, { "answer_id": 2091384, "score": 1, "text": "<p>As pointed out by Andreas we can assume $p=q$. Fix a chart $(U,\\psi)$ with $\\psi(p)=0$ and let $g : \\mathbb R^n \\to \\mathbb R^n$ be the unique $k^\\text{th}$ order polynomial such that $j^k_p (\\psi^{-1} g \\psi) = \\xi$. Note that $g(0) = 0$ because $\\xi \\in J_p (M,M)_p$. Let $L$ be the linear part of $g$. Restrict the domain of $g$ to some ball $B_R(0)$ small enough that $g$ is an embedding.</p>\n\n<p>Define $g : (0,1] \\times B_R \\to \\mathbb R^n$ by $g_t(x) = g(tx)/t$ and note that $\\lim_{t \\to 0} g_t = g'(0) = L$, so this $g_t$ defines an isotopy of $g$ to $L$. Any smooth path from $L$ to the identity in $GL(n,\\mathbb R)$ (here we need the condition that $\\xi$ is orientation-preserving!) is then an isotopy of $L$ to the inclusion map $i :B_R \\to \\mathbb R^n$; so composing these two isotopies we get a diffeotopy $f : [0,1] \\times \\mathbb R^n \\to \\mathbb R^n$ from $g$ to $i$.</p>\n\n<p>Now, let $r_1,r_2 &lt; R$ be small enough that the origin-centred balls $B_{r_1},B_{r_2}$ lie inside $\\psi(U)$ and $f(I\\times B_{r_1}) \\subset B_{r_2}$$^\\dagger$; and define the isotopy</p>\n\n<p>$$ F : [0,1] \\times \\psi^{-1}(B_{r_1}) \\to M$$\nby $F_t = \\psi^{-1} f_t \\psi$. Then applying the isotopy extension theorem (see e.g. Hirsch section 8.1) we get a diffeotopy of $M$ that agrees with $F$ on a neighbourhood of $p$, and thus whose endpoint has jet $\\xi$ at $p$.</p>\n\n<p>$\\dagger$ We can pull this off because 0-fixing polynomials satisfy $g(tx) \\le C t g(x)$ for $t\\in[0,1]$ and $C$ independent of $t$, so the dilation isotopy can't make the image blow up; and similarly the path in $GL(n,\\mathbb R)$ can be chosen so that the operator norm is bounded.</p>\n" }, { "answer_id": 2090253, "score": 2, "text": "<p>I think that if you assume $\\dim(M)\\geq 2$ (?) 
and that $M$ is connected (or at least that all connected components of $M$ are diffeomorphic), then you can always construct an appropriate diffeomorphism which is isotopic to the identity. I can't give a full proof, but I think I can describe how to get it: </p>\n\n<p>The main input I can give is reducing things to the case that $p=q$. This is done by choosing a smooth injective path $c:[0,1]\\to M$ connecting $p$ and $q$, then extending the velocity field $c'$ to a vector field in a tubular neighborhood of $c([0,1])$ the path and then multiplying it by an appropriate bump function. This gives you a vector field with compact support (which thus has a global flow) and agrees with the given velocity field locally around the chosen path. Hence the flow of this vector field at time $t=1$ is a diffeomorphism (with compact support) that maps $p$ to $q$. </p>\n\n<p>Composing with this diffeomorphism you can reduce things to the case $q=p$, in which working with vector fields becomes much easier. Since we are now talking about diffeomorphisms fixing $p$, we can use vector fields vanishing at $p$, and any germ of such a vector field now easily extends to a globally defined, compactly supported vector field, which has a global flow. Now one could either try to study the map from jets of vector fields to jets of diffeomorphism induced by the local flow. The other possibility would be to argue step by step, showing next that any one-jet can be realized by an appropriate vector field and so on. </p>\n" }, { "answer_id": 2091384, "score": 1, "text": "<p>As pointed out by Andreas we can assume $p=q$. Fix a chart $(U,\\psi)$ with $\\psi(p)=0$ and let $g : \\mathbb R^n \\to \\mathbb R^n$ be the unique $k^\\text{th}$ order polynomial such that $j^k_p (\\psi^{-1} g \\psi) = \\xi$. Note that $g(0) = 0$ because $\\xi \\in J_p (M,M)_p$. Let $L$ be the linear part of $g$. Restrict the domain of $g$ to some ball $B_R(0)$ small enough that $g$ is an embedding.</p>\n\n<p>Define $g : (0,1] \\times B_R \\to \\mathbb R^n$ by $g_t(x) = g(tx)/t$ and note that $\\lim_{t \\to 0} g_t = g'(0) = L$, so this $g_t$ defines an isotopy of $g$ to $L$. Any smooth path from $L$ to the identity in $GL(n,\\mathbb R)$ (here we need the condition that $\\xi$ is orientation-preserving!) is then an isotopy of $L$ to the inclusion map $i :B_R \\to \\mathbb R^n$; so composing these two isotopies we get a diffeotopy $f : [0,1] \\times \\mathbb R^n \\to \\mathbb R^n$ from $g$ to $i$.</p>\n\n<p>Now, let $r_1,r_2 &lt; R$ be small enough that the origin-centred balls $B_{r_1},B_{r_2}$ lie inside $\\psi(U)$ and $f(I\\times B_{r_1}) \\subset B_{r_2}$$^\\dagger$; and define the isotopy</p>\n\n<p>$$ F : [0,1] \\times \\psi^{-1}(B_{r_1}) \\to M$$\nby $F_t = \\psi^{-1} f_t \\psi$. Then applying the isotopy extension theorem (see e.g. Hirsch section 8.1) we get a diffeotopy of $M$ that agrees with $F$ on a neighbourhood of $p$, and thus whose endpoint has jet $\\xi$ at $p$.</p>\n\n<p>$\\dagger$ We can pull this off because 0-fixing polynomials satisfy $g(tx) \\le C t g(x)$ for $t\\in[0,1]$ and $C$ independent of $t$, so the dilation isotopy can't make the image blow up; and similarly the path in $GL(n,\\mathbb R)$ can be chosen so that the operator norm is bounded.</p>\n" }, { "answer_id": 2090253, "score": 2, "text": "<p>I think that if you assume $\\dim(M)\\geq 2$ (?) 
and that $M$ is connected (or at least that all connected components of $M$ are diffeomorphic), then you can always construct an appropriate diffeomorphism which is isotopic to the identity. I can't give a full proof, but I think I can describe how to get it: </p>\n\n<p>The main input I can give is reducing things to the case that $p=q$. This is done by choosing a smooth injective path $c:[0,1]\\to M$ connecting $p$ and $q$, then extending the velocity field $c'$ to a vector field in a tubular neighborhood of $c([0,1])$ the path and then multiplying it by an appropriate bump function. This gives you a vector field with compact support (which thus has a global flow) and agrees with the given velocity field locally around the chosen path. Hence the flow of this vector field at time $t=1$ is a diffeomorphism (with compact support) that maps $p$ to $q$. </p>\n\n<p>Composing with this diffeomorphism you can reduce things to the case $q=p$, in which working with vector fields becomes much easier. Since we are now talking about diffeomorphisms fixing $p$, we can use vector fields vanishing at $p$, and any germ of such a vector field now easily extends to a globally defined, compactly supported vector field, which has a global flow. Now one could either try to study the map from jets of vector fields to jets of diffeomorphism induced by the local flow. The other possibility would be to argue step by step, showing next that any one-jet can be realized by an appropriate vector field and so on. </p>\n" }, { "answer_id": 2091384, "score": 1, "text": "<p>As pointed out by Andreas we can assume $p=q$. Fix a chart $(U,\\psi)$ with $\\psi(p)=0$ and let $g : \\mathbb R^n \\to \\mathbb R^n$ be the unique $k^\\text{th}$ order polynomial such that $j^k_p (\\psi^{-1} g \\psi) = \\xi$. Note that $g(0) = 0$ because $\\xi \\in J_p (M,M)_p$. Let $L$ be the linear part of $g$. Restrict the domain of $g$ to some ball $B_R(0)$ small enough that $g$ is an embedding.</p>\n\n<p>Define $g : (0,1] \\times B_R \\to \\mathbb R^n$ by $g_t(x) = g(tx)/t$ and note that $\\lim_{t \\to 0} g_t = g'(0) = L$, so this $g_t$ defines an isotopy of $g$ to $L$. Any smooth path from $L$ to the identity in $GL(n,\\mathbb R)$ (here we need the condition that $\\xi$ is orientation-preserving!) is then an isotopy of $L$ to the inclusion map $i :B_R \\to \\mathbb R^n$; so composing these two isotopies we get a diffeotopy $f : [0,1] \\times \\mathbb R^n \\to \\mathbb R^n$ from $g$ to $i$.</p>\n\n<p>Now, let $r_1,r_2 &lt; R$ be small enough that the origin-centred balls $B_{r_1},B_{r_2}$ lie inside $\\psi(U)$ and $f(I\\times B_{r_1}) \\subset B_{r_2}$$^\\dagger$; and define the isotopy</p>\n\n<p>$$ F : [0,1] \\times \\psi^{-1}(B_{r_1}) \\to M$$\nby $F_t = \\psi^{-1} f_t \\psi$. Then applying the isotopy extension theorem (see e.g. Hirsch section 8.1) we get a diffeotopy of $M$ that agrees with $F$ on a neighbourhood of $p$, and thus whose endpoint has jet $\\xi$ at $p$.</p>\n\n<p>$\\dagger$ We can pull this off because 0-fixing polynomials satisfy $g(tx) \\le C t g(x)$ for $t\\in[0,1]$ and $C$ independent of $t$, so the dilation isotopy can't make the image blow up; and similarly the path in $GL(n,\\mathbb R)$ can be chosen so that the operator norm is bounded.</p>\n" }, { "answer_id": 2090253, "score": 2, "text": "<p>I think that if you assume $\\dim(M)\\geq 2$ (?) 
and that $M$ is connected (or at least that all connected components of $M$ are diffeomorphic), then you can always construct an appropriate diffeomorphism which is isotopic to the identity. I can't give a full proof, but I think I can describe how to get it: </p>\n\n<p>The main input I can give is reducing things to the case that $p=q$. This is done by choosing a smooth injective path $c:[0,1]\\to M$ connecting $p$ and $q$, then extending the velocity field $c'$ to a vector field in a tubular neighborhood of $c([0,1])$ the path and then multiplying it by an appropriate bump function. This gives you a vector field with compact support (which thus has a global flow) and agrees with the given velocity field locally around the chosen path. Hence the flow of this vector field at time $t=1$ is a diffeomorphism (with compact support) that maps $p$ to $q$. </p>\n\n<p>Composing with this diffeomorphism you can reduce things to the case $q=p$, in which working with vector fields becomes much easier. Since we are now talking about diffeomorphisms fixing $p$, we can use vector fields vanishing at $p$, and any germ of such a vector field now easily extends to a globally defined, compactly supported vector field, which has a global flow. Now one could either try to study the map from jets of vector fields to jets of diffeomorphism induced by the local flow. The other possibility would be to argue step by step, showing next that any one-jet can be realized by an appropriate vector field and so on. </p>\n" }, { "answer_id": 2091384, "score": 1, "text": "<p>As pointed out by Andreas we can assume $p=q$. Fix a chart $(U,\\psi)$ with $\\psi(p)=0$ and let $g : \\mathbb R^n \\to \\mathbb R^n$ be the unique $k^\\text{th}$ order polynomial such that $j^k_p (\\psi^{-1} g \\psi) = \\xi$. Note that $g(0) = 0$ because $\\xi \\in J_p (M,M)_p$. Let $L$ be the linear part of $g$. Restrict the domain of $g$ to some ball $B_R(0)$ small enough that $g$ is an embedding.</p>\n\n<p>Define $g : (0,1] \\times B_R \\to \\mathbb R^n$ by $g_t(x) = g(tx)/t$ and note that $\\lim_{t \\to 0} g_t = g'(0) = L$, so this $g_t$ defines an isotopy of $g$ to $L$. Any smooth path from $L$ to the identity in $GL(n,\\mathbb R)$ (here we need the condition that $\\xi$ is orientation-preserving!) is then an isotopy of $L$ to the inclusion map $i :B_R \\to \\mathbb R^n$; so composing these two isotopies we get a diffeotopy $f : [0,1] \\times \\mathbb R^n \\to \\mathbb R^n$ from $g$ to $i$.</p>\n\n<p>Now, let $r_1,r_2 &lt; R$ be small enough that the origin-centred balls $B_{r_1},B_{r_2}$ lie inside $\\psi(U)$ and $f(I\\times B_{r_1}) \\subset B_{r_2}$$^\\dagger$; and define the isotopy</p>\n\n<p>$$ F : [0,1] \\times \\psi^{-1}(B_{r_1}) \\to M$$\nby $F_t = \\psi^{-1} f_t \\psi$. Then applying the isotopy extension theorem (see e.g. Hirsch section 8.1) we get a diffeotopy of $M$ that agrees with $F$ on a neighbourhood of $p$, and thus whose endpoint has jet $\\xi$ at $p$.</p>\n\n<p>$\\dagger$ We can pull this off because 0-fixing polynomials satisfy $g(tx) \\le C t g(x)$ for $t\\in[0,1]$ and $C$ independent of $t$, so the dilation isotopy can't make the image blow up; and similarly the path in $GL(n,\\mathbb R)$ can be chosen so that the operator norm is bounded.</p>\n" }, { "answer_id": 2090253, "score": 2, "text": "<p>I think that if you assume $\\dim(M)\\geq 2$ (?) 
and that $M$ is connected (or at least that all connected components of $M$ are diffeomorphic), then you can always construct an appropriate diffeomorphism which is isotopic to the identity. I can't give a full proof, but I think I can describe how to get it: </p>\n\n<p>The main input I can give is reducing things to the case that $p=q$. This is done by choosing a smooth injective path $c:[0,1]\\to M$ connecting $p$ and $q$, then extending the velocity field $c'$ to a vector field in a tubular neighborhood of $c([0,1])$ the path and then multiplying it by an appropriate bump function. This gives you a vector field with compact support (which thus has a global flow) and agrees with the given velocity field locally around the chosen path. Hence the flow of this vector field at time $t=1$ is a diffeomorphism (with compact support) that maps $p$ to $q$. </p>\n\n<p>Composing with this diffeomorphism you can reduce things to the case $q=p$, in which working with vector fields becomes much easier. Since we are now talking about diffeomorphisms fixing $p$, we can use vector fields vanishing at $p$, and any germ of such a vector field now easily extends to a globally defined, compactly supported vector field, which has a global flow. Now one could either try to study the map from jets of vector fields to jets of diffeomorphism induced by the local flow. The other possibility would be to argue step by step, showing next that any one-jet can be realized by an appropriate vector field and so on. </p>\n" }, { "answer_id": 2091384, "score": 1, "text": "<p>As pointed out by Andreas we can assume $p=q$. Fix a chart $(U,\\psi)$ with $\\psi(p)=0$ and let $g : \\mathbb R^n \\to \\mathbb R^n$ be the unique $k^\\text{th}$ order polynomial such that $j^k_p (\\psi^{-1} g \\psi) = \\xi$. Note that $g(0) = 0$ because $\\xi \\in J_p (M,M)_p$. Let $L$ be the linear part of $g$. Restrict the domain of $g$ to some ball $B_R(0)$ small enough that $g$ is an embedding.</p>\n\n<p>Define $g : (0,1] \\times B_R \\to \\mathbb R^n$ by $g_t(x) = g(tx)/t$ and note that $\\lim_{t \\to 0} g_t = g'(0) = L$, so this $g_t$ defines an isotopy of $g$ to $L$. Any smooth path from $L$ to the identity in $GL(n,\\mathbb R)$ (here we need the condition that $\\xi$ is orientation-preserving!) is then an isotopy of $L$ to the inclusion map $i :B_R \\to \\mathbb R^n$; so composing these two isotopies we get a diffeotopy $f : [0,1] \\times \\mathbb R^n \\to \\mathbb R^n$ from $g$ to $i$.</p>\n\n<p>Now, let $r_1,r_2 &lt; R$ be small enough that the origin-centred balls $B_{r_1},B_{r_2}$ lie inside $\\psi(U)$ and $f(I\\times B_{r_1}) \\subset B_{r_2}$$^\\dagger$; and define the isotopy</p>\n\n<p>$$ F : [0,1] \\times \\psi^{-1}(B_{r_1}) \\to M$$\nby $F_t = \\psi^{-1} f_t \\psi$. Then applying the isotopy extension theorem (see e.g. Hirsch section 8.1) we get a diffeotopy of $M$ that agrees with $F$ on a neighbourhood of $p$, and thus whose endpoint has jet $\\xi$ at $p$.</p>\n\n<p>$\\dagger$ We can pull this off because 0-fixing polynomials satisfy $g(tx) \\le C t g(x)$ for $t\\in[0,1]$ and $C$ independent of $t$, so the dilation isotopy can't make the image blow up; and similarly the path in $GL(n,\\mathbb R)$ can be chosen so that the operator norm is bounded.</p>\n" }, { "answer_id": 2090253, "score": 2, "text": "<p>I think that if you assume $\\dim(M)\\geq 2$ (?) 
and that $M$ is connected (or at least that all connected components of $M$ are diffeomorphic), then you can always construct an appropriate diffeomorphism which is isotopic to the identity. I can't give a full proof, but I think I can describe how to get it: </p>\n\n<p>The main input I can give is reducing things to the case that $p=q$. This is done by choosing a smooth injective path $c:[0,1]\\to M$ connecting $p$ and $q$, then extending the velocity field $c'$ to a vector field in a tubular neighborhood of $c([0,1])$ the path and then multiplying it by an appropriate bump function. This gives you a vector field with compact support (which thus has a global flow) and agrees with the given velocity field locally around the chosen path. Hence the flow of this vector field at time $t=1$ is a diffeomorphism (with compact support) that maps $p$ to $q$. </p>\n\n<p>Composing with this diffeomorphism you can reduce things to the case $q=p$, in which working with vector fields becomes much easier. Since we are now talking about diffeomorphisms fixing $p$, we can use vector fields vanishing at $p$, and any germ of such a vector field now easily extends to a globally defined, compactly supported vector field, which has a global flow. Now one could either try to study the map from jets of vector fields to jets of diffeomorphism induced by the local flow. The other possibility would be to argue step by step, showing next that any one-jet can be realized by an appropriate vector field and so on. </p>\n" }, { "answer_id": 2091384, "score": 1, "text": "<p>As pointed out by Andreas we can assume $p=q$. Fix a chart $(U,\\psi)$ with $\\psi(p)=0$ and let $g : \\mathbb R^n \\to \\mathbb R^n$ be the unique $k^\\text{th}$ order polynomial such that $j^k_p (\\psi^{-1} g \\psi) = \\xi$. Note that $g(0) = 0$ because $\\xi \\in J_p (M,M)_p$. Let $L$ be the linear part of $g$. Restrict the domain of $g$ to some ball $B_R(0)$ small enough that $g$ is an embedding.</p>\n\n<p>Define $g : (0,1] \\times B_R \\to \\mathbb R^n$ by $g_t(x) = g(tx)/t$ and note that $\\lim_{t \\to 0} g_t = g'(0) = L$, so this $g_t$ defines an isotopy of $g$ to $L$. Any smooth path from $L$ to the identity in $GL(n,\\mathbb R)$ (here we need the condition that $\\xi$ is orientation-preserving!) is then an isotopy of $L$ to the inclusion map $i :B_R \\to \\mathbb R^n$; so composing these two isotopies we get a diffeotopy $f : [0,1] \\times \\mathbb R^n \\to \\mathbb R^n$ from $g$ to $i$.</p>\n\n<p>Now, let $r_1,r_2 &lt; R$ be small enough that the origin-centred balls $B_{r_1},B_{r_2}$ lie inside $\\psi(U)$ and $f(I\\times B_{r_1}) \\subset B_{r_2}$$^\\dagger$; and define the isotopy</p>\n\n<p>$$ F : [0,1] \\times \\psi^{-1}(B_{r_1}) \\to M$$\nby $F_t = \\psi^{-1} f_t \\psi$. Then applying the isotopy extension theorem (see e.g. Hirsch section 8.1) we get a diffeotopy of $M$ that agrees with $F$ on a neighbourhood of $p$, and thus whose endpoint has jet $\\xi$ at $p$.</p>\n\n<p>$\\dagger$ We can pull this off because 0-fixing polynomials satisfy $g(tx) \\le C t g(x)$ for $t\\in[0,1]$ and $C$ independent of $t$, so the dilation isotopy can't make the image blow up; and similarly the path in $GL(n,\\mathbb R)$ can be chosen so that the operator norm is bounded.</p>\n" } ]
Question: $k$-jet transitivity of diffeomorphism group Answer 1: <p>I think that if you assume $\dim(M)\geq 2$ (?) and that $M$ is connected (or at least that all connected components of $M$ are diffeomorphic), then you can always construct an appropriate diffeomorphism which is isotopic to the identity. I can't give a full proof, but I think I can describe how to get it: </p> <p>The main input I can give is reducing things to the case that $p=q$. This is done by choosing a smooth injective path $c:[0,1]\to M$ connecting $p$ and $q$, then extending the velocity field $c'$ to a vector field in a tubular neighborhood of the path $c([0,1])$ and then multiplying it by an appropriate bump function. This gives you a vector field with compact support (which thus has a global flow) and agrees with the given velocity field locally around the chosen path. Hence the flow of this vector field at time $t=1$ is a diffeomorphism (with compact support) that maps $p$ to $q$. </p> <p>Composing with this diffeomorphism you can reduce things to the case $q=p$, in which working with vector fields becomes much easier. Since we are now talking about diffeomorphisms fixing $p$, we can use vector fields vanishing at $p$, and any germ of such a vector field now easily extends to a globally defined, compactly supported vector field, which has a global flow. Now one could either try to study the map from jets of vector fields to jets of diffeomorphisms induced by the local flow. The other possibility would be to argue step by step, showing next that any one-jet can be realized by an appropriate vector field and so on. </p> Answer 2: <p>As pointed out by Andreas we can assume $p=q$. Fix a chart $(U,\psi)$ with $\psi(p)=0$ and let $g : \mathbb R^n \to \mathbb R^n$ be the unique $k^\text{th}$ order polynomial such that $j^k_p (\psi^{-1} g \psi) = \xi$. Note that $g(0) = 0$ because $\xi \in J_p (M,M)_p$. Let $L$ be the linear part of $g$. Restrict the domain of $g$ to some ball $B_R(0)$ small enough that $g$ is an embedding.</p> <p>Define $G : (0,1] \times B_R \to \mathbb R^n$ by $G(t,x) = g_t(x) = g(tx)/t$ and note that $\lim_{t \to 0} g_t = g'(0) = L$, so $(g_t)$ defines an isotopy from $g$ to $L$. Any smooth path from $L$ to the identity in $GL(n,\mathbb R)$ (here we need the condition that $\xi$ is orientation-preserving!) is then an isotopy of $L$ to the inclusion map $i : B_R \to \mathbb R^n$; so composing these two isotopies we get a diffeotopy $f : [0,1] \times \mathbb R^n \to \mathbb R^n$ from $g$ to $i$.</p> <p>Now, let $r_1,r_2 &lt; R$ be small enough that the origin-centred balls $B_{r_1},B_{r_2}$ lie inside $\psi(U)$ and $f([0,1]\times B_{r_1}) \subset B_{r_2}$${}^\dagger$; and define the isotopy</p> <p>$$ F : [0,1] \times \psi^{-1}(B_{r_1}) \to M$$ by $F_t = \psi^{-1} f_t \psi$. Then applying the isotopy extension theorem (see e.g. Hirsch section 8.1) we get a diffeotopy of $M$ that agrees with $F$ on a neighbourhood of $p$, and thus whose endpoint has jet $\xi$ at $p$.</p> <p>$\dagger$ We can pull this off because polynomials fixing $0$ satisfy $|g(tx)| \le C\,t$ for $t\in[0,1]$ and $x \in B_R$, with $C$ independent of $t$ and $x$, so the dilation isotopy can't make the image blow up; and similarly the path in $GL(n,\mathbb R)$ can be chosen so that the operator norm is bounded.</p>
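To make the one-jet step at the end of Answer 1 concrete, here is a minimal sketch in a chart around $p$ (the chart identification, the matrix names $A$, $S$, $K$, and the polar-decomposition route are illustrative choices of mine, not taken from the answers): a linear vector field has a linear flow, and the polar decomposition writes any orientation-preserving one-jet as the one-jet of a composition of two such time-one flows.

```latex
% Sketch (assuming a chart identifying a neighbourhood of p with R^n, p = 0).
% A linear vector field has an explicitly linear flow:
\[
  X_A(x) = Ax \ \ (A \in \mathfrak{gl}(n,\mathbb R)), \qquad
  \mathrm{Fl}^{X_A}_t(x) = e^{tA}x, \qquad
  j^1_0\bigl(\mathrm{Fl}^{X_A}_1\bigr) = e^{A}.
\]
% Polar decomposition: any B in GL^+(n,R) factors as B = e^S e^K with
% S symmetric (the logarithm of the positive-definite factor) and K
% skew-symmetric (exp is onto SO(n)), so two time-one flows realize B:
\[
  B = e^{S}e^{K} \in GL^{+}(n,\mathbb R)
  \ \Longrightarrow\
  j^1_0\bigl(\mathrm{Fl}^{X_S}_1 \circ \mathrm{Fl}^{X_K}_1\bigr)
  = e^{S}e^{K} = B.
\]
```

Multiplying $X_S$ and $X_K$ by a bump function equal to $1$ near the origin makes the flows globally defined without changing the one-jet at $0$, matching the compact-support convention used in both answers.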
Cute limit: $\lim\limits_{x \to 0^{+}}\frac{\sin(x)^x - x^{\sin(x)}}{\tan(x)^x - x^{\tan(x)}}$
2,088,774
6
https://math.stackexchange.com/questions/2088774/cute-limit-lim-limits-x-to-0-frac-sinxx-%e2%88%92-x-sinx-tanx-x
<p>What's the fastest way to calculate the limit below (ideally, without resorting to Taylor expansions)?</p> <p>$$\lim_{x \to 0^{+}}\frac{\sin(x)^x - x^{\sin(x)}}{\tan(x)^x - x^{\tan(x)}}$$</p>
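Before the posted solutions, a quick numerical sanity check is easy to run; the sketch below uses Python with mpmath at high precision (the library choice and the sample points are my own, not part of the original post). The printed ratios drift toward $-0.5$, matching the closed-form value derived in the answers that follow.

```python
# Numerical sanity check (illustrative, not from the original post):
# evaluate (sin(x)^x - x^sin(x)) / (tan(x)^x - x^tan(x)) for small x > 0.
# High working precision is needed: the numerator behaves like
# (x - sin x) * log x and the denominator like (x - tan x) * log x,
# both of size x^3 * |log x|, so naive floats cancel badly.
from mpmath import mp, mpf, sin, tan

mp.dps = 60  # work with 60 significant digits

for k in range(1, 7):
    x = mpf(10) ** (-k)  # x = 0.1, 0.01, ..., 1e-6
    num = sin(x) ** x - x ** sin(x)
    den = tan(x) ** x - x ** tan(x)
    print(f"x = 1e-{k}: ratio = {num / den}")
# The output approaches -0.5 as x shrinks, consistent with the limit -1/2.
```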
[ { "answer_id": 2088904, "score": 8, "text": "<p>The Mean Value Theorem says\n$$\n\\frac{e^x-e^y}{x-y}=e^\\xi\n$$\nfor some $\\xi$ between $x$ and $y$. Therefore,\n$$\n\\begin{align}\n\\lim_{x\\to0^+}\\frac{\\sin(x)^x−x^{\\sin(x)}}{\\tan(x)^x−x^{\\tan(x)}}\n&amp;=\\lim_{x\\to0^+}\\frac{x\\log(\\sin(x))-\\sin(x)\\log(x)}{x\\log(\\tan(x))-\\tan(x)\\log(x)}\\tag{1}\\\\[6pt]\n&amp;=\\lim_{x\\to0^+}\\frac{x\\log(x)+x\\log\\left(\\frac{\\sin(x)}{x}\\right)-\\sin(x)\\log(x)}{x\\log(x)+x\\log\\left(\\frac{\\tan(x)}{x}\\right)-\\tan(x)\\log(x)}\\tag{2}\\\\\n&amp;=\\lim_{x\\to0^+}\\frac{x-\\sin(x)+\\frac{x}{\\log(x)}\\log\\left(\\frac{\\sin(x)}{x}\\right)}{x-\\tan(x)+\\frac{x}{\\log(x)}\\log\\left(\\frac{\\tan(x)}{x}\\right)}\\tag{3}\\\\\n&amp;=\\lim_{x\\to0^+}\\frac{x-\\sin(x)+\\frac{x}{\\log(x)}O\\!\\left(\\frac{x-\\sin(x)}{x}\\right)}{x-\\tan(x)+\\frac{x}{\\log(x)}O\\!\\left(\\frac{x-\\tan(x)}{x}\\right)}\\tag{4}\\\\\n&amp;=\\lim_{x\\to0^+}\\frac{x-\\sin(x)+O\\!\\left(\\frac{x-\\sin(x)}{\\log(x)}\\right)}{x-\\tan(x)+O\\!\\left(\\frac{x-\\tan(x)}{\\log(x)}\\right)}\\tag{5}\\\\[9pt]\n&amp;=\\lim_{x\\to0^+}\\frac{x-\\sin(x)}{x-\\tan(x)}\\tag{6}\\\\[15pt]\n&amp;=\\lim_{x\\to0^+}\\frac{1-\\cos(x)}{1-\\sec^2(x)}\\tag{7}\\\\[15pt]\n&amp;=\\lim_{x\\to0^+}-\\frac{\\cos^2(x)}{1+\\cos(x)}\\tag{8}\\\\[15pt]\n&amp;=-\\frac12\\tag{9}\n\\end{align}\n$$\nExplanation:<br>\n$(1)$: Mean Value Theorem<br>\n$(2)$: $\\log(ab)=\\log(a)+\\log(b)$<br>\n$(3)$: divide numerator and denominator by $\\log(x)$<br>\n$(4)$: $\\log(1+x)=O(x)$<br>\n$(5)$: algebra<br>\n$(6)$: $O\\!\\left({\\raise{1.5pt}\\frac{u}{\\log(x)}}\\right)=o(u)$ as $x\\to0^+$<br>\n$(7)$: L'Hôpital<br>\n$(8)$: multiply numerator and denominator by $\\frac{\\cos^2(x)}{1-\\cos(x)}$<br>\n$(9)$: evaluate at $x=0$</p>\n" }, { "answer_id": 2088893, "score": 4, "text": "<p>To expand on my comment, I will use the following two limits apart from the standard lmits: $$\\lim_{x \\to 0}\\frac{x - \\sin x}{x^{3}} = \\frac{1}{6},\\,\\lim_{x \\to 0}\\frac{x - \\tan x}{x^{3}} = -\\frac{1}{3}$$ Both the above are easily obtained either via Taylor series or via L'Hospital's Rule.</p>\n\n<p>We have then\n\\begin{align}\nL &amp;= \\lim_{x \\to 0^{+}}\\frac{\\sin^{x}x - x^{\\sin x}}{\\tan^{x}x - x^{\\tan x}}\\notag\\\\\n&amp;= \\lim_{x \\to 0^{+}}\\frac{\\exp(\\sin x\\log x)}{\\exp(\\tan x\\log x)}\\cdot\\frac{\\exp(x\\log\\sin x - \\sin x\\log x) - 1}{\\exp(x\\log\\tan x - \\tan x\\log x) - 1}\\notag\\\\\n&amp;= \\lim_{x \\to 0^{+}}\\frac{\\exp(x\\log\\sin x - \\sin x\\log x) - 1}{\\exp(x\\log\\tan x - \\tan x\\log x) - 1}\\notag\\\\\n&amp;= \\lim_{x \\to 0^{+}}\\frac{\\exp(x\\log\\sin x - \\sin x\\log x) - 1}{x\\log\\sin x - \\sin x\\log x}\\cdot\\frac{x\\log\\sin x - \\sin x\\log x}{x\\log\\tan x - \\tan x\\log x}\\notag\\\\\n&amp;\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\cdot\\frac{x\\log\\tan x - \\tan x\\log x}{\\exp(x\\log\\tan x - \\tan x\\log x) - 1}\\notag\\\\\n&amp;= \\lim_{x \\to 0^{+}}\\frac{x\\log\\sin x - \\sin x\\log x}{x\\log\\tan x - \\tan x\\log x}\\notag\\\\\n&amp;= \\lim_{x \\to 0^{+}}\\dfrac{(x - \\sin x)\\log x + x\\log\\dfrac{\\sin x}{x}}{(x - \\tan x)\\log x + x\\log\\dfrac{\\tan x}{x}}\\notag\\\\\n&amp;= \\lim_{x \\to 0^{+}}\\frac{x - \\sin x}{x - \\tan x}\\cdot\\dfrac{1 - \\dfrac{1}{\\log x}\\dfrac{x}{\\sin x - x}\\log\\dfrac{\\sin x}{x}}{1 - \\dfrac{1}{\\log x}\\dfrac{x}{\\tan x - x}\\log\\dfrac{\\tan x}{x}}\\notag\\\\\n&amp;= \\lim_{x \\to 0^{+}}\\frac{x - \\sin x}{x - \\tan x}\\cdot\\dfrac{1 - 0\\cdot 1}{1 - 0\\cdot 
1}\\notag\\\\\n&amp;= \\lim_{x \\to 0^{+}}\\dfrac{\\dfrac{x - \\sin x}{x^{3}}}{\\dfrac{x - \\tan x}{x^{3}}}\\notag\\\\\n&amp;= -\\frac{1}{2}\\notag\n\\end{align}</p>\n\n<hr>\n\n<p>We have used the following standard limits in the derivation above $$\\lim_{x \\to 0}\\frac{\\sin x}{x} = \\lim_{x \\to 0}\\frac{\\tan x}{x} = \\lim_{x \\to 0}\\frac{\\log(1 + x)}{x} = \\lim_{x \\to 0}\\frac{\\exp(x) - 1}{x} = 1,\\lim_{x \\to 0^{+}}x\\log x = 0$$ Note that $\\sin x\\log x = \\dfrac{\\sin x}{x}\\cdot x \\log x \\to 1\\cdot 0 = 0$ as $x \\to 0^{+}$ and similarly $\\tan x\\log x \\to 0$. Further $x\\log\\sin x = x\\log x + x\\log\\dfrac{\\sin x}{x} \\to 0 + 0\\cdot \\log 1 = 0$ as $x \\to 0^{+}$ and similarly $x\\log\\tan x \\to 0$. Also note that if $t = (\\sin x - x)/x$ then $t \\to 0$ as $x \\to 0^{+}$ and hence $$\\frac{x}{\\sin x - x}\\log\\frac{\\sin x}{x} = \\frac{\\log(1 + t)}{t} \\to 1$$ and in similar manner $$\\frac{x}{\\tan x - x}\\log\\frac{\\tan x}{x} \\to 1$$</p>\n\n<p>The above is a reasonably fast way (though not as good as the use of Taylor series) to evaluate this cute limit and the real effort is in typing the full solution. The limits mentioned in the beginning of the answer are already famous and play a key role here apart from the standard limits mentioned in the later part of the answer.</p>\n" } ]
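A one-line cross-check of the step both answers reduce to (an editorial addition, not part of either answer): with the Taylor expansions already quoted in the second answer,
$$\sin x = x - \frac{x^{3}}{6} + O(x^{5}), \qquad \tan x = x + \frac{x^{3}}{3} + O(x^{5}),$$
so that
$$\frac{x - \sin x}{x - \tan x} = \frac{\dfrac{x^{3}}{6} + O(x^{5})}{-\dfrac{x^{3}}{3} + O(x^{5})} \to -\frac{1}{2} \quad (x \to 0^{+}).$$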
Question: Cute limit: $\lim\limits_{x \to 0^{+}}\frac{\sin(x)^x − x^{\sin(x)}}{\tan(x)^x − x^{\tan(x)}}$ Answer 1: <p>The Mean Value Theorem says $$ \frac{e^x-e^y}{x-y}=e^\xi $$ for some $\xi$ between $x$ and $y$. Therefore, $$ \begin{align} \lim_{x\to0^+}\frac{\sin(x)^x−x^{\sin(x)}}{\tan(x)^x−x^{\tan(x)}} &amp;=\lim_{x\to0^+}\frac{x\log(\sin(x))-\sin(x)\log(x)}{x\log(\tan(x))-\tan(x)\log(x)}\tag{1}\\[6pt] &amp;=\lim_{x\to0^+}\frac{x\log(x)+x\log\left(\frac{\sin(x)}{x}\right)-\sin(x)\log(x)}{x\log(x)+x\log\left(\frac{\tan(x)}{x}\right)-\tan(x)\log(x)}\tag{2}\\ &amp;=\lim_{x\to0^+}\frac{x-\sin(x)+\frac{x}{\log(x)}\log\left(\frac{\sin(x)}{x}\right)}{x-\tan(x)+\frac{x}{\log(x)}\log\left(\frac{\tan(x)}{x}\right)}\tag{3}\\ &amp;=\lim_{x\to0^+}\frac{x-\sin(x)+\frac{x}{\log(x)}O\!\left(\frac{x-\sin(x)}{x}\right)}{x-\tan(x)+\frac{x}{\log(x)}O\!\left(\frac{x-\tan(x)}{x}\right)}\tag{4}\\ &amp;=\lim_{x\to0^+}\frac{x-\sin(x)+O\!\left(\frac{x-\sin(x)}{\log(x)}\right)}{x-\tan(x)+O\!\left(\frac{x-\tan(x)}{\log(x)}\right)}\tag{5}\\[9pt] &amp;=\lim_{x\to0^+}\frac{x-\sin(x)}{x-\tan(x)}\tag{6}\\[15pt] &amp;=\lim_{x\to0^+}\frac{1-\cos(x)}{1-\sec^2(x)}\tag{7}\\[15pt] &amp;=\lim_{x\to0^+}-\frac{\cos^2(x)}{1+\cos(x)}\tag{8}\\[15pt] &amp;=-\frac12\tag{9} \end{align} $$ Explanation:<br> $(1)$: Mean Value Theorem<br> $(2)$: $\log(ab)=\log(a)+\log(b)$<br> $(3)$: divide numerator and denominator by $\log(x)$<br> $(4)$: $\log(1+x)=O(x)$<br> $(5)$: algebra<br> $(6)$: $O\!\left({\raise{1.5pt}\frac{u}{\log(x)}}\right)=o(u)$ as $x\to0^+$<br> $(7)$: L'Hôpital<br> $(8)$: multiply numerator and denominator by $\frac{\cos^2(x)}{1-\cos(x)}$<br> $(9)$: evaluate at $x=0$</p> Answer 2: <p>To expand on my comment, I will use the following two limits apart from the standard limits: $$\lim_{x \to 0}\frac{x - \sin x}{x^{3}} = \frac{1}{6},\,\lim_{x \to 0}\frac{x - \tan x}{x^{3}} = -\frac{1}{3}$$ Both the above are easily obtained either via Taylor series or via L'Hospital's Rule.</p> <p>We have then \begin{align} L &amp;= \lim_{x \to 0^{+}}\frac{\sin^{x}x - x^{\sin x}}{\tan^{x}x - x^{\tan x}}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{\exp(\sin x\log x)}{\exp(\tan x\log x)}\cdot\frac{\exp(x\log\sin x - \sin x\log x) - 1}{\exp(x\log\tan x - \tan x\log x) - 1}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{\exp(x\log\sin x - \sin x\log x) - 1}{\exp(x\log\tan x - \tan x\log x) - 1}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{\exp(x\log\sin x - \sin x\log x) - 1}{x\log\sin x - \sin x\log x}\cdot\frac{x\log\sin x - \sin x\log x}{x\log\tan x - \tan x\log x}\notag\\ &amp;\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\cdot\frac{x\log\tan x - \tan x\log x}{\exp(x\log\tan x - \tan x\log x) - 1}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{x\log\sin x - \sin x\log x}{x\log\tan x - \tan x\log x}\notag\\ &amp;= \lim_{x \to 0^{+}}\dfrac{(x - \sin x)\log x + x\log\dfrac{\sin x}{x}}{(x - \tan x)\log x + x\log\dfrac{\tan x}{x}}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{x - \sin x}{x - \tan x}\cdot\dfrac{1 - \dfrac{1}{\log x}\dfrac{x}{\sin x - x}\log\dfrac{\sin x}{x}}{1 - \dfrac{1}{\log x}\dfrac{x}{\tan x - x}\log\dfrac{\tan x}{x}}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{x - \sin x}{x - \tan x}\cdot\dfrac{1 - 0\cdot 1}{1 - 0\cdot 1}\notag\\ &amp;= \lim_{x \to 0^{+}}\dfrac{\dfrac{x - \sin x}{x^{3}}}{\dfrac{x - \tan x}{x^{3}}}\notag\\ &amp;= -\frac{1}{2}\notag \end{align}</p> <hr> <p>We have used the following standard limits in the derivation above $$\lim_{x \to 0}\frac{\sin x}{x} = \lim_{x \to 0}\frac{\tan x}{x} = \lim_{x \to 0}\frac{\log(1 + x)}{x} = \lim_{x \to
0}\frac{\exp(x) - 1}{x} = 1,\lim_{x \to 0^{+}}x\log x = 0$$ Note that $\sin x\log x = \dfrac{\sin x}{x}\cdot x \log x \to 1\cdot 0 = 0$ as $x \to 0^{+}$ and similarly $\tan x\log x \to 0$. Further $x\log\sin x = x\log x + x\log\dfrac{\sin x}{x} \to 0 + 0\cdot \log 1 = 0$ as $x \to 0^{+}$ and similarly $x\log\tan x \to 0$. Also note that if $t = (\sin x - x)/x$ then $t \to 0$ as $x \to 0^{+}$ and hence $$\frac{x}{\sin x - x}\log\frac{\sin x}{x} = \frac{\log(1 + t)}{t} \to 1$$ and in similar manner $$\frac{x}{\tan x - x}\log\frac{\tan x}{x} \to 1$$</p> <p>The above is a reasonably fast way (though not as good as the use of Taylor series) to evaluate this cute limit and the real effort is in typing the full solution. The limits mentioned in the beginning of the answer are already famous and play a key role here apart from the standard limits mentioned in the later part of the answer.</p> Answer 3: <p>The Mean Value Theorem says $$ \frac{e^x-e^y}{x-y}=e^\xi $$ for some $\xi$ between $x$ and $y$. Therefore, $$ \begin{align} \lim_{x\to0^+}\frac{\sin(x)^x−x^{\sin(x)}}{\tan(x)^x−x^{\tan(x)}} &amp;=\lim_{x\to0^+}\frac{x\log(\sin(x))-\sin(x)\log(x)}{x\log(\tan(x))-\tan(x)\log(x)}\tag{1}\\[6pt] &amp;=\lim_{x\to0^+}\frac{x\log(x)+x\log\left(\frac{\sin(x)}{x}\right)-\sin(x)\log(x)}{x\log(x)+x\log\left(\frac{\tan(x)}{x}\right)-\tan(x)\log(x)}\tag{2}\\ &amp;=\lim_{x\to0^+}\frac{x-\sin(x)+\frac{x}{\log(x)}\log\left(\frac{\sin(x)}{x}\right)}{x-\tan(x)+\frac{x}{\log(x)}\log\left(\frac{\tan(x)}{x}\right)}\tag{3}\\ &amp;=\lim_{x\to0^+}\frac{x-\sin(x)+\frac{x}{\log(x)}O\!\left(\frac{x-\sin(x)}{x}\right)}{x-\tan(x)+\frac{x}{\log(x)}O\!\left(\frac{x-\tan(x)}{x}\right)}\tag{4}\\ &amp;=\lim_{x\to0^+}\frac{x-\sin(x)+O\!\left(\frac{x-\sin(x)}{\log(x)}\right)}{x-\tan(x)+O\!\left(\frac{x-\tan(x)}{\log(x)}\right)}\tag{5}\\[9pt] &amp;=\lim_{x\to0^+}\frac{x-\sin(x)}{x-\tan(x)}\tag{6}\\[15pt] &amp;=\lim_{x\to0^+}\frac{1-\cos(x)}{1-\sec^2(x)}\tag{7}\\[15pt] &amp;=\lim_{x\to0^+}-\frac{\cos^2(x)}{1+\cos(x)}\tag{8}\\[15pt] &amp;=-\frac12\tag{9} \end{align} $$ Explanation:<br> $(1)$: Mean Value Theorem<br> $(2)$: $\log(ab)=\log(a)+\log(b)$<br> $(3)$: divide numerator and denominator by $\log(x)$<br> $(4)$: $\log(1+x)=O(x)$<br> $(5)$: algebra<br> $(6)$: $O\!\left({\raise{1.5pt}\frac{u}{\log(x)}}\right)=o(u)$ as $x\to0^+$<br> $(7)$: L'Hôpital<br> $(8)$: multiply numerator and denominator by $\frac{\cos^2(x)}{1-\cos(x)}$<br> $(9)$: evaluate at $x=0$</p> Answer 4: <p>To expand on my comment, I will use the following two limits apart from the standard lmits: $$\lim_{x \to 0}\frac{x - \sin x}{x^{3}} = \frac{1}{6},\,\lim_{x \to 0}\frac{x - \tan x}{x^{3}} = -\frac{1}{3}$$ Both the above are easily obtained either via Taylor series or via L'Hospital's Rule.</p> <p>We have then \begin{align} L &amp;= \lim_{x \to 0^{+}}\frac{\sin^{x}x - x^{\sin x}}{\tan^{x}x - x^{\tan x}}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{\exp(\sin x\log x)}{\exp(\tan x\log x)}\cdot\frac{\exp(x\log\sin x - \sin x\log x) - 1}{\exp(x\log\tan x - \tan x\log x) - 1}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{\exp(x\log\sin x - \sin x\log x) - 1}{\exp(x\log\tan x - \tan x\log x) - 1}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{\exp(x\log\sin x - \sin x\log x) - 1}{x\log\sin x - \sin x\log x}\cdot\frac{x\log\sin x - \sin x\log x}{x\log\tan x - \tan x\log x}\notag\\ &amp;\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\cdot\frac{x\log\tan x - \tan x\log x}{\exp(x\log\tan x - \tan x\log x) - 1}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{x\log\sin x - \sin x\log x}{x\log\tan x - \tan 
x\log x}\notag\\ &amp;= \lim_{x \to 0^{+}}\dfrac{(x - \sin x)\log x + x\log\dfrac{\sin x}{x}}{(x - \tan x)\log x + x\log\dfrac{\tan x}{x}}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{x - \sin x}{x - \tan x}\cdot\dfrac{1 - \dfrac{1}{\log x}\dfrac{x}{\sin x - x}\log\dfrac{\sin x}{x}}{1 - \dfrac{1}{\log x}\dfrac{x}{\tan x - x}\log\dfrac{\tan x}{x}}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{x - \sin x}{x - \tan x}\cdot\dfrac{1 - 0\cdot 1}{1 - 0\cdot 1}\notag\\ &amp;= \lim_{x \to 0^{+}}\dfrac{\dfrac{x - \sin x}{x^{3}}}{\dfrac{x - \tan x}{x^{3}}}\notag\\ &amp;= -\frac{1}{2}\notag \end{align}</p> <hr> <p>We have used the following standard limits in the derivation above $$\lim_{x \to 0}\frac{\sin x}{x} = \lim_{x \to 0}\frac{\tan x}{x} = \lim_{x \to 0}\frac{\log(1 + x)}{x} = \lim_{x \to 0}\frac{\exp(x) - 1}{x} = 1,\lim_{x \to 0^{+}}x\log x = 0$$ Note that $\sin x\log x = \dfrac{\sin x}{x}\cdot x \log x \to 1\cdot 0 = 0$ as $x \to 0^{+}$ and similarly $\tan x\log x \to 0$. Further $x\log\sin x = x\log x + x\log\dfrac{\sin x}{x} \to 0 + 0\cdot \log 1 = 0$ as $x \to 0^{+}$ and similarly $x\log\tan x \to 0$. Also note that if $t = (\sin x - x)/x$ then $t \to 0$ as $x \to 0^{+}$ and hence $$\frac{x}{\sin x - x}\log\frac{\sin x}{x} = \frac{\log(1 + t)}{t} \to 1$$ and in similar manner $$\frac{x}{\tan x - x}\log\frac{\tan x}{x} \to 1$$</p> <p>The above is a reasonably fast way (though not as good as the use of Taylor series) to evaluate this cute limit and the real effort is in typing the full solution. The limits mentioned in the beginning of the answer are already famous and play a key role here apart from the standard limits mentioned in the later part of the answer.</p> Answer 5: <p>The Mean Value Theorem says $$ \frac{e^x-e^y}{x-y}=e^\xi $$ for some $\xi$ between $x$ and $y$. 
Therefore, $$ \begin{align} \lim_{x\to0^+}\frac{\sin(x)^x−x^{\sin(x)}}{\tan(x)^x−x^{\tan(x)}} &amp;=\lim_{x\to0^+}\frac{x\log(\sin(x))-\sin(x)\log(x)}{x\log(\tan(x))-\tan(x)\log(x)}\tag{1}\\[6pt] &amp;=\lim_{x\to0^+}\frac{x\log(x)+x\log\left(\frac{\sin(x)}{x}\right)-\sin(x)\log(x)}{x\log(x)+x\log\left(\frac{\tan(x)}{x}\right)-\tan(x)\log(x)}\tag{2}\\ &amp;=\lim_{x\to0^+}\frac{x-\sin(x)+\frac{x}{\log(x)}\log\left(\frac{\sin(x)}{x}\right)}{x-\tan(x)+\frac{x}{\log(x)}\log\left(\frac{\tan(x)}{x}\right)}\tag{3}\\ &amp;=\lim_{x\to0^+}\frac{x-\sin(x)+\frac{x}{\log(x)}O\!\left(\frac{x-\sin(x)}{x}\right)}{x-\tan(x)+\frac{x}{\log(x)}O\!\left(\frac{x-\tan(x)}{x}\right)}\tag{4}\\ &amp;=\lim_{x\to0^+}\frac{x-\sin(x)+O\!\left(\frac{x-\sin(x)}{\log(x)}\right)}{x-\tan(x)+O\!\left(\frac{x-\tan(x)}{\log(x)}\right)}\tag{5}\\[9pt] &amp;=\lim_{x\to0^+}\frac{x-\sin(x)}{x-\tan(x)}\tag{6}\\[15pt] &amp;=\lim_{x\to0^+}\frac{1-\cos(x)}{1-\sec^2(x)}\tag{7}\\[15pt] &amp;=\lim_{x\to0^+}-\frac{\cos^2(x)}{1+\cos(x)}\tag{8}\\[15pt] &amp;=-\frac12\tag{9} \end{align} $$ Explanation:<br> $(1)$: Mean Value Theorem<br> $(2)$: $\log(ab)=\log(a)+\log(b)$<br> $(3)$: divide numerator and denominator by $\log(x)$<br> $(4)$: $\log(1+x)=O(x)$<br> $(5)$: algebra<br> $(6)$: $O\!\left({\raise{1.5pt}\frac{u}{\log(x)}}\right)=o(u)$ as $x\to0^+$<br> $(7)$: L'Hôpital<br> $(8)$: multiply numerator and denominator by $\frac{\cos^2(x)}{1-\cos(x)}$<br> $(9)$: evaluate at $x=0$</p> Answer 6: <p>To expand on my comment, I will use the following two limits apart from the standard lmits: $$\lim_{x \to 0}\frac{x - \sin x}{x^{3}} = \frac{1}{6},\,\lim_{x \to 0}\frac{x - \tan x}{x^{3}} = -\frac{1}{3}$$ Both the above are easily obtained either via Taylor series or via L'Hospital's Rule.</p> <p>We have then \begin{align} L &amp;= \lim_{x \to 0^{+}}\frac{\sin^{x}x - x^{\sin x}}{\tan^{x}x - x^{\tan x}}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{\exp(\sin x\log x)}{\exp(\tan x\log x)}\cdot\frac{\exp(x\log\sin x - \sin x\log x) - 1}{\exp(x\log\tan x - \tan x\log x) - 1}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{\exp(x\log\sin x - \sin x\log x) - 1}{\exp(x\log\tan x - \tan x\log x) - 1}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{\exp(x\log\sin x - \sin x\log x) - 1}{x\log\sin x - \sin x\log x}\cdot\frac{x\log\sin x - \sin x\log x}{x\log\tan x - \tan x\log x}\notag\\ &amp;\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\cdot\frac{x\log\tan x - \tan x\log x}{\exp(x\log\tan x - \tan x\log x) - 1}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{x\log\sin x - \sin x\log x}{x\log\tan x - \tan x\log x}\notag\\ &amp;= \lim_{x \to 0^{+}}\dfrac{(x - \sin x)\log x + x\log\dfrac{\sin x}{x}}{(x - \tan x)\log x + x\log\dfrac{\tan x}{x}}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{x - \sin x}{x - \tan x}\cdot\dfrac{1 - \dfrac{1}{\log x}\dfrac{x}{\sin x - x}\log\dfrac{\sin x}{x}}{1 - \dfrac{1}{\log x}\dfrac{x}{\tan x - x}\log\dfrac{\tan x}{x}}\notag\\ &amp;= \lim_{x \to 0^{+}}\frac{x - \sin x}{x - \tan x}\cdot\dfrac{1 - 0\cdot 1}{1 - 0\cdot 1}\notag\\ &amp;= \lim_{x \to 0^{+}}\dfrac{\dfrac{x - \sin x}{x^{3}}}{\dfrac{x - \tan x}{x^{3}}}\notag\\ &amp;= -\frac{1}{2}\notag \end{align}</p> <hr> <p>We have used the following standard limits in the derivation above $$\lim_{x \to 0}\frac{\sin x}{x} = \lim_{x \to 0}\frac{\tan x}{x} = \lim_{x \to 0}\frac{\log(1 + x)}{x} = \lim_{x \to 0}\frac{\exp(x) - 1}{x} = 1,\lim_{x \to 0^{+}}x\log x = 0$$ Note that $\sin x\log x = \dfrac{\sin x}{x}\cdot x \log x \to 1\cdot 0 = 0$ as $x \to 0^{+}$ and similarly $\tan x\log x \to 0$. 